UniTune: Text-Driven Image Editing by Fine Tuning a Diffusion Model on a Single Image
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 17.10.2022 |
Subjects | |
Online Access | Get full text |
Summary: | Text-driven image generation methods have shown impressive results recently, allowing casual users to generate high quality images by providing textual descriptions. However, similar capabilities for editing existing images are still out of reach. Text-driven image editing methods usually need edit masks, struggle with edits that require significant visual changes, and cannot easily preserve specific details of the edited portion. In this paper we make the observation that image-generation models can be converted into image-editing models simply by fine-tuning them on a single image. We also show that initializing the stochastic sampler with a noised version of the base image before sampling, and interpolating relevant details from the base image after sampling, further increase the quality of the edit operation. Combining these observations, we propose UniTune, a novel image-editing method. UniTune takes as input an arbitrary image and a textual edit description, and carries out the edit while maintaining high fidelity to the input image. UniTune does not require additional inputs, such as masks or sketches, and can perform multiple edits on the same image without retraining. We test our method using the Imagen model in a range of different use cases. We demonstrate that it is broadly applicable and can perform a surprisingly wide range of expressive editing operations, including those requiring significant visual changes that were previously impossible. |
DOI: | 10.48550/arxiv.2210.09477 |
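
The abstract describes a three-part procedure: fine-tune the diffusion model on the single input image, initialize the stochastic sampler from a noised copy of that image, and interpolate details from the base image back in after sampling. The sketch below illustrates that flow under heavy simplification; the `TinyDenoiser` stand-in, the linear noising schedule, `start_t`, and the blend weights are hypothetical placeholders, not the Imagen model or the paper's actual hyperparameters.

```python
# Minimal, self-contained sketch of the procedure outlined in the abstract:
# (1) fine-tune a text-conditioned denoiser on a single base image,
# (2) start sampling from a noised version of that image,
# (3) blend details from the base image back in after sampling.
# All names and constants here are illustrative guesses.

import torch


class TinyDenoiser(torch.nn.Module):
    """Placeholder noise-prediction network (ignores time and text inputs)."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x, t, text_emb):
        return self.net(x)


def finetune_on_single_image(model, base_image, text_emb, steps=10, lr=1e-4):
    """Fine-tune the denoiser on one image with a standard noise-prediction loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        t = torch.rand(1)                                # random time in [0, 1)
        noise = torch.randn_like(base_image)
        noisy = (1.0 - t) * base_image + t * noise       # simplified noising
        loss = torch.nn.functional.mse_loss(model(noisy, t, text_emb), noise)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


def edit(model, base_image, edit_text_emb, start_t=0.6, num_steps=50):
    """Sample from a noised copy of the base image, then reinject base details."""
    x = (1.0 - start_t) * base_image + start_t * torch.randn_like(base_image)
    for t in torch.linspace(start_t, 0.0, num_steps):
        x = x - (start_t / num_steps) * model(x, t, edit_text_emb)  # crude Euler step
    return 0.9 * x + 0.1 * base_image                    # detail interpolation (guess)


if __name__ == "__main__":
    base = torch.rand(1, 3, 64, 64)    # stand-in for the input image
    text = torch.rand(1, 16)           # stand-in for the edit-text embedding
    model = finetune_on_single_image(TinyDenoiser(), base, text)
    print(edit(model, base, text).shape)   # torch.Size([1, 3, 64, 64])
```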