In this paper we present StyleBin, an approach to example-based stylization of videos that can produce a consistent binocular depiction of stylized content on stereoscopic displays. Given a target sequence and a set of stylized keyframes accompanied by information about depth in the scene, we formulate an optimization problem that converts the target video into a pair of stylized sequences, in which each frame consists of a set of seamlessly stitched patches taken from the stylized keyframes. The aim of the optimization is to align the individual patches so that they respect the semantics of the target scene, follow the prescribed local disparity in the corresponding viewpoints, and remain consistent over time. In contrast to previous depth-aware style transfer techniques, our approach is the first that can deliver semantically meaningful stylization and preserve the essential visual characteristics of the given artistic media. We demonstrate the practical utility of the proposed method in various stylization use cases.

SIGGRAPH Asia 2022 Conference Papers, art. no. 15, 2022 (SIGGRAPH Asia 2022, Daegu, South Korea, December 2022)
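To make the roles of the three constraints more concrete, the toy sketch below scores a single patch placement with a guidance term (the stylized patch should match the target content), a disparity term (left and right placements should differ by the prescribed local disparity), and a temporal term (the placement should not drift between consecutive frames). The function name, the quadratic penalties, and the weights are illustrative assumptions, not the paper's actual energy formulation.

```python
import numpy as np

def toy_patch_energy(target_patch, source_patch,
                     x_left, x_right, disparity, x_prev,
                     w_guide=1.0, w_disp=1.0, w_temp=1.0):
    """Hypothetical per-patch cost combining the three constraints named in
    the abstract; the penalties and weights are assumptions for illustration."""
    # Guidance: the stylized source patch should resemble the target content.
    guide = float(np.sum((target_patch - source_patch) ** 2))
    # Disparity: horizontal placements in the left/right views should
    # differ by the prescribed local disparity.
    disp = (x_left - x_right - disparity) ** 2
    # Temporal coherence: the placement should not jump between frames.
    temp = (x_left - x_prev) ** 2
    return w_guide * guide + w_disp * disp + w_temp * temp

# Example: a 5x5 patch placed at x=10 in the left view and x=7 in the
# right view, with a prescribed disparity of 3 pixels.
rng = np.random.default_rng(0)
t, s = rng.random((5, 5)), rng.random((5, 5))
print(toy_patch_energy(t, s, x_left=10, x_right=7, disparity=3, x_prev=10))
```

In a full patch-based optimizer, terms of this kind would be summed over all patches in both views and minimized jointly over the patch assignments; the sketch only illustrates how the individual constraints interact for one placement.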