ChunkyGAN

Real Image Inversion via Segments

Adéla Šubrtová*
CTU in Prague, FEE
 
David Futschik*
CTU in Prague, FEE
 
Jan Čech
CTU in Prague, FEE
Michal Lukáč
Adobe Research
 
Eli Shechtman
Adobe Research
 
Daniel Sýkora
CTU in Prague, FEE

* - joint first authors



Abstract

We present ChunkyGAN—a novel paradigm for modeling and editing images using generative adversarial networks. Unlike previous techniques that seek a single global latent representation of the input image, our approach subdivides the input image into a set of smaller components (chunks) specified either manually or automatically using a pre-trained segmentation network. For each chunk, the latent code of a generative network is estimated locally; with fewer constraints per chunk, the estimate is more accurate. Moreover, during the optimization of the latent codes, the segmentation can be further refined to improve matching quality. This process enables high-quality projection of the original image with a degree of spatial disentanglement that previous methods would find challenging to achieve. To demonstrate the advantage of our approach, we evaluated it both quantitatively and qualitatively in various image editing scenarios that benefit from the higher reconstruction quality and local nature of the approach. Our method is flexible enough to manipulate even out-of-domain images that would be hard to reconstruct using global techniques.
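The core idea of the abstract—optimizing a separate latent code per segment and compositing the results by the segmentation masks—can be illustrated with a toy sketch. The snippet below is not the authors' implementation: it replaces the pre-trained GAN generator with a hypothetical linear map and uses a plain masked L2 loss with hand-written gradient descent, whereas the actual method inverts a real generator (e.g., with perceptual losses and learned segmentation). All function names here are illustrative.

```python
import numpy as np

def toy_generator(w, W):
    """Stand-in 'generator': maps a latent code w to a flat image.
    A real GAN generator would take this role (assumption for illustration)."""
    return W @ w

def invert_chunk(target, mask, W, steps=500, lr=0.1):
    """Optimize a per-chunk latent code so the generated image matches
    the target only inside the given binary segment mask."""
    w = np.zeros(W.shape[1])
    for _ in range(steps):
        residual = mask * (toy_generator(w, W) - target)
        grad = 2.0 * W.T @ residual  # gradient of the masked L2 loss
        w -= lr * grad
    return w

def chunky_invert(target, masks, W):
    """Per-segment inversion: one latent per chunk, composited by masks."""
    latents = [invert_chunk(target, m, W) for m in masks]
    return sum(m * toy_generator(w, W) for m, w in zip(masks, latents))

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8)) / 4.0          # 16-pixel "image", 8-dim latent
target = rng.normal(size=16)                # image to invert
masks = [np.array([1.0] * 8 + [0.0] * 8),   # two complementary segments
         np.array([0.0] * 8 + [1.0] * 8)]
recon = chunky_invert(target, masks, W)
```

Because each chunk constrains only its masked pixels, every latent is fit against a smaller system than a single global inversion would face—mirroring the paper's observation that local estimates are more accurate, and that edits to one chunk's latent leave the other chunks untouched.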

Full Text     Supplementary Material     BibTeX

Proceedings of the European Conference on Computer Vision, pp. 189–204, 2022

(ECCV'22, Tel Aviv, Israel, October 2022)
