In this paper we present Diffusion Image Analogies, an example-based image editing approach that builds upon the concept of image analogies originally introduced by Hertzmann et al. [2001]. Given a pair of images that specifies the intent of a particular transition, our approach modifies the target image so that it follows the analogy specified by this exemplar. In contrast to previous techniques, which captured analogies mostly at the level of low-level textural detail, our approach also handles changes in higher-level semantics, including transitions between object domains, changes of facial expression, or stylization. Although similar modifications can be achieved using diffusion models guided by text prompts [Rombach et al. 2022], our approach operates solely in the domain of images, without the need to specify the user's intent in textual form. We demonstrate the power of our approach in various challenging scenarios where the specified analogy would be difficult to transfer using previous techniques.

ACM SIGGRAPH 2023 Conference Proceedings, art. no. 79, 2023 (SIGGRAPH 2023, Los Angeles, USA, August 2023)
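To make the analogy formulation concrete, the sketch below illustrates one common way to express an exemplar pair as an edit direction in an image-embedding space: the pair (A, A') defines a direction that is added to the embedding of the target B. This is only an illustration of the analogy idea, not the paper's actual method; the CLIP checkpoint and the image file names (A.png, A_prime.png, B.png) are assumptions, and the final decoding step is deliberately left out.

```python
# A minimal sketch of the image-analogy idea as CLIP embedding arithmetic.
# NOTE: this is NOT the paper's method; it only illustrates how an exemplar
# pair (A, A') can define an edit direction applied to a target image B.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def embed(path: str) -> torch.Tensor:
    """Return a unit-normalized CLIP image embedding for the image at `path`."""
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

# Hypothetical file names standing in for the exemplar pair and the target.
emb_a = embed("A.png")            # exemplar before
emb_a_prime = embed("A_prime.png")  # exemplar after
emb_b = embed("B.png")            # target to be edited

# The exemplar pair defines the analogy direction; adding it to the target's
# embedding yields a description of the desired output B'.
target_b_prime = emb_b + (emb_a_prime - emb_a)

# Decoding `target_b_prime` back into pixels would require an image-embedding-
# conditioned diffusion decoder (e.g., an unCLIP-style model); that step is
# omitted here.
```

The appeal of this formulation, and of the image-analogy setup in general, is that the edit is specified entirely by images: the direction emb(A') - emb(A) plays the role that a text prompt plays in text-guided diffusion editing.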