StorySync: Training-Free Subject Consistency in Text-to-Image Generation via Region Harmonization

Bibliographic Details
Main Authors: Gaur, Gopalji; Zolfaghari, Mohammadreza; Brox, Thomas
Format: Journal Article
Language: English
Published: 31.07.2025
DOI: 10.48550/arxiv.2508.03735

Summary: Generating a coherent sequence of images that tells a visual story, using text-to-image diffusion models, often faces the critical challenge of maintaining subject consistency across all story scenes. Existing approaches, which typically rely on fine-tuning or retraining models, are computationally expensive, time-consuming, and often interfere with the model's pre-existing capabilities. In this paper, we follow a training-free approach and propose an efficient consistent-subject-generation method. This approach works seamlessly with pre-trained diffusion models by introducing masked cross-image attention sharing to dynamically align subject features across a batch of images, and Regional Feature Harmonization to refine visually similar details for improved subject consistency. Experimental results demonstrate that our approach successfully generates visually consistent subjects across a variety of scenarios while maintaining the creative abilities of the diffusion model.
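The summary describes masked cross-image attention sharing, in which each image's attention queries additionally attend to subject-region features shared from the other images in the batch. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the function name, tensor shapes, and the use of binary subject masks are assumptions for illustration, not the authors' implementation.

    # Hedged sketch: masked cross-image attention sharing (illustrative only,
    # not the StorySync code). Assumes per-image attention features q, k, v
    # and a binary subject mask per image are already available.
    import torch

    def masked_cross_image_attention(q, k, v, subject_mask):
        """
        q, k, v:      (B, N, D) per-image attention features (B images, N tokens).
        subject_mask: (B, N) binary mask, 1 where a token belongs to the subject.
        Returns:      (B, N, D) output where each image's queries also attend to
                      the subject tokens of every other image in the batch.
        """
        B, N, D = q.shape
        outputs = []
        for i in range(B):
            # Keys/values of the current image (all of its tokens) ...
            k_self, v_self = k[i], v[i]                                   # (N, D)
            # ... plus subject-region tokens shared from the other images.
            others = [j for j in range(B) if j != i]
            k_shared = torch.cat([k[j][subject_mask[j].bool()] for j in others], dim=0)
            v_shared = torch.cat([v[j][subject_mask[j].bool()] for j in others], dim=0)
            k_all = torch.cat([k_self, k_shared], dim=0)                  # (N + M, D)
            v_all = torch.cat([v_self, v_shared], dim=0)
            # Standard scaled dot-product attention over the extended key set.
            attn = torch.softmax(q[i] @ k_all.T / D ** 0.5, dim=-1)       # (N, N + M)
            outputs.append(attn @ v_all)                                  # (N, D)
        return torch.stack(outputs, dim=0)

    # Toy usage: 3 images, 16 tokens each, 8-dimensional features.
    B, N, D = 3, 16, 8
    q, k, v = (torch.randn(B, N, D) for _ in range(3))
    mask = (torch.rand(B, N) > 0.5).float()
    out = masked_cross_image_attention(q, k, v, mask)
    print(out.shape)  # torch.Size([3, 16, 8])

In this reading, sharing only masked (subject) keys and values lets subject appearance align across the batch while each image's background tokens attend only within their own image, which is one plausible way a training-free method could avoid disturbing the model's other generative capabilities.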