Generative Photomontage
Published on Aug 14 · Submitted by akhaliq on Aug 15
Authors: Sean J. Liu, Nupur Kumari, Ariel Shamir, Jun-Yan Zhu
Abstract
Text-to-image models are powerful tools for image creation. However, the generation process is akin to a dice roll, making it difficult to achieve a single image that captures everything a user wants. In this paper, we propose a framework for creating the desired image by compositing it from various parts of generated images, in essence forming a Generative Photomontage. Given a stack of images generated by ControlNet using the same input condition and different seeds, we let users select desired parts from the generated results using a brush stroke interface. We introduce a novel technique that takes in the user's brush strokes, segments the generated images using a graph-based optimization in diffusion feature space, and then composites the segmented regions via a new feature-space blending method. Our method faithfully preserves the user-selected regions while compositing them harmoniously. We demonstrate that our flexible framework can be used for many applications, including generating new appearance combinations, fixing incorrect shapes and artifacts, and improving prompt alignment. We show compelling results for each application and demonstrate that our method outperforms existing image blending methods and various baselines.
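To make the two core steps of the abstract concrete, here is a minimal sketch of stroke-constrained, graph-cut segmentation over per-pixel diffusion features, followed by a naive per-cell feature composite. This is not the authors' implementation: it assumes a binary two-image case (the paper solves a multi-label optimization over a whole image stack), and the names `graph_cut_composite`, `composite_features`, `features`, and `strokes` are illustrative. In the actual method, the composited features would be blended in the diffusion model's feature space during denoising rather than returned directly.

```python
# A minimal sketch (assumptions noted above), NOT the paper's implementation:
# two-image, binary-label graph cut over diffusion features, plus a naive
# per-cell feature composite.

import numpy as np
import networkx as nx

def graph_cut_composite(features, strokes, smooth_weight=1.0):
    """Assign each spatial cell to one of two generated images via min-cut.

    features: (2, H, W, C) per-image diffusion feature maps.
    strokes:  (2, H, W) binary masks, 1 where the user brushed image i.
    Returns an (H, W) label map with values in {0, 1}.
    """
    _, H, W, _ = features.shape
    G = nx.DiGraph()
    src, snk = "s", "t"
    big = 1e9  # effectively infinite capacity: brushed cells are pinned

    def node(y, x):
        return y * W + x

    for y in range(H):
        for x in range(W):
            n = node(y, x)
            # Unary terms: user strokes hard-constrain the label.
            if strokes[0, y, x]:
                G.add_edge(src, n, capacity=big)
            if strokes[1, y, x]:
                G.add_edge(n, snk, capacity=big)
            # Pairwise terms: the cost of a seam between neighbors is the
            # feature disagreement between the two images there, so the
            # minimum cut routes seams through regions where the generated
            # images already agree.
            for dy, dx in ((0, 1), (1, 0)):
                yy, xx = y + dy, x + dx
                if yy < H and xx < W:
                    diff = (np.linalg.norm(features[0, y, x] - features[1, y, x])
                            + np.linalg.norm(features[0, yy, xx] - features[1, yy, xx]))
                    G.add_edge(n, node(yy, xx), capacity=smooth_weight * diff)
                    G.add_edge(node(yy, xx), n, capacity=smooth_weight * diff)

    _, (source_side, _) = nx.minimum_cut(G, src, snk)
    labels = np.ones((H, W), dtype=int)
    for n in source_side:
        if n != src:
            labels[n // W, n % W] = 0
    return labels

def composite_features(features, labels):
    # Gather features[labels[y, x], y, x, :] for every spatial cell.
    H, W = labels.shape
    return features[labels, np.arange(H)[:, None], np.arange(W)[None, :]]
```

The seam cost here is the classic photomontage objective (cuts are cheap where the source images agree); measuring agreement in diffusion feature space rather than pixel space, as the abstract describes, makes the comparison semantic rather than purely photometric.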
Paper page: https://huggingface.co/papers/2408.07116