Instilling Multi-round Thinking to Text-guided Image Generation

Bibliographic Details
Published in: arXiv.org
Main Authors: Zeng, Lidong; Zheng, Zhedong; Wei, Yinwei; Chua, Tat-Seng
Format: Paper; Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 09.03.2024

Summary: This paper delves into the text-guided image editing task, focusing on modifying a reference image according to user-specified textual feedback so that it embodies specific attributes. Despite recent advancements, a persistent challenge remains: single-round generation often overlooks crucial details, particularly fine-grained changes such as shoes or sleeves. This issue compounds over multiple rounds of interaction, severely limiting customization quality. To address this challenge, we introduce a new self-supervised regularization, i.e., multi-round regularization, which is compatible with existing methods. Specifically, the multi-round regularization encourages the model to maintain consistency across different modification orders, building on the observation that the modification order generally should not affect the final result. Unlike traditional one-round generation, the proposed method is motivated by the amplification of initially minor inaccuracies in capturing intricate details across successive rounds. Qualitative and quantitative experiments affirm that the proposed method achieves high-fidelity editing quality, especially for local modifications, in both single-round and multiple-round generation, while also showing robust generalization to irregular text inputs. The effectiveness of our semantic alignment with textual feedback is further substantiated by retrieval improvements on FashionIQ and Fashion200k.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2401.08472
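
The multi-round regularization described in the summary can be illustrated with a minimal sketch. This is not the authors' implementation: TextGuidedEditor, the feature dimensions, and the MSE penalty below are placeholder assumptions standing in for an existing text-guided editing backbone; the sketch only shows the core idea of penalizing disagreement between the two possible orders of applying a pair of textual edits.

    # Illustrative sketch (not the paper's code): an order-consistency
    # regularizer for text-guided editing. `TextGuidedEditor` is a stand-in
    # for any existing editing backbone that maps (image, text) features to
    # edited image features.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TextGuidedEditor(nn.Module):
        """Placeholder editor: fuses image and text features into edited image features."""
        def __init__(self, dim: int = 256):
            super().__init__()
            self.fuse = nn.Sequential(
                nn.Linear(2 * dim, dim),
                nn.ReLU(),
                nn.Linear(dim, dim),
            )

        def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
            return self.fuse(torch.cat([img_feat, txt_feat], dim=-1))

    def multi_round_consistency_loss(editor: nn.Module,
                                     img: torch.Tensor,
                                     txt_a: torch.Tensor,
                                     txt_b: torch.Tensor) -> torch.Tensor:
        """Apply two edits in both orders and penalize the gap between the results,
        reflecting the observation that modification order should not change the outcome."""
        out_ab = editor(editor(img, txt_a), txt_b)   # edit A, then edit B
        out_ba = editor(editor(img, txt_b), txt_a)   # edit B, then edit A
        return F.mse_loss(out_ab, out_ba)

    if __name__ == "__main__":
        editor = TextGuidedEditor()
        img = torch.randn(8, 256)    # batch of image features
        txt_a = torch.randn(8, 256)  # e.g., features of "make the sleeves shorter"
        txt_b = torch.randn(8, 256)  # e.g., features of "change the shoes to red"
        reg = multi_round_consistency_loss(editor, img, txt_a, txt_b)
        print(reg.item())

In practice such a term would be weighted and added to the task loss of whichever editing method is being regularized, which is consistent with the summary's claim that the regularization is compatible with existing methods.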