SubZero: Composing Subject, Style, and Action via Zero-Shot Personalization
| Main Authors | |
| --- | --- |
| Format | Journal Article |
| Language | English |
| Published | 26.02.2025 |
| DOI | 10.48550/arxiv.2502.19673 |
Summary: Diffusion models are increasingly popular for generative tasks, including the personalized composition of subjects and styles. While diffusion models can generate user-specified subjects performing text-guided actions in custom styles, they require fine-tuning and are therefore impractical for personalization on mobile devices. Hence, tuning-free personalization methods such as IP-Adapters have progressively gained traction. However, for the composition of subjects and styles, these works are less flexible due to their reliance on ControlNet, or they exhibit content and style leakage artifacts. To tackle these issues, we present SubZero, a novel framework that generates any subject in any style, performing any action, without the need for fine-tuning. We propose a novel set of constraints to enhance subject and style similarity while reducing leakage. Additionally, we propose an orthogonalized temporal aggregation scheme in the cross-attention blocks of the denoising model, effectively conditioning on a text prompt along with single subject and style images. We also propose a novel method to train customized content and style projectors to reduce content and style leakage. Through extensive experiments, we show that our proposed approach, while suitable for running on edge devices, shows significant improvements over state-of-the-art works performing subject, style and action composition.
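The abstract does not spell out the aggregation scheme, but the phrase "orthogonalized temporal aggregation" suggests combining per-condition cross-attention outputs while projecting one conditioning branch orthogonal to another to limit leakage. The sketch below is a plausible reading of that idea, not the authors' implementation: the tensor names, the fixed combination weights, and the choice to orthogonalize the style branch against the subject branch are all assumptions, and the time-dependent ("temporal") weighting across denoising steps is simplified away.

```python
# Minimal sketch (assumed, not the SubZero release code) of an
# orthogonalized aggregation of conditioning signals inside a
# cross-attention block of a diffusion denoiser.

import torch
import torch.nn.functional as F


def orthogonalized_aggregation(h_text, h_subject, h_style,
                               w_subj=1.0, w_style=1.0):
    """Combine per-condition cross-attention outputs.

    h_text, h_subject, h_style: (batch, tokens, dim) attention outputs
    computed against the text, subject-image, and style-image contexts.
    The style branch is projected orthogonal to the subject branch so
    the two signals interfere less (one reading of "reducing leakage").
    """
    # Unit vectors along the subject features, per token.
    subj_unit = F.normalize(h_subject, dim=-1)
    # Component of the style features lying along the subject features.
    coeff = (h_style * subj_unit).sum(dim=-1, keepdim=True)
    # Gram-Schmidt step: h_style_perp = h_style - proj_{h_subject}(h_style).
    h_style_perp = h_style - coeff * subj_unit
    # Aggregate all three branches; weights are illustrative constants
    # (a timestep-dependent schedule would make this "temporal").
    return h_text + w_subj * h_subject + w_style * h_style_perp


# Toy usage with CLIP-like token dimensions.
b, t, d = 2, 77, 768
out = orthogonalized_aggregation(torch.randn(b, t, d),
                                 torch.randn(b, t, d),
                                 torch.randn(b, t, d))
```

The design intuition behind such a projection is that any part of the style signal parallel to the subject signal would rescale subject identity rather than add style, so removing it lets the two conditions compose with less mutual contamination.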