Training-free diffusion for controlling illumination conditions in images
Published in | Computer Vision and Image Understanding, Vol. 260; p. 104450 |
Main Authors | |
Format | Journal Article |
Language | English |
Published | Elsevier Inc, 01.10.2025 |
Subjects | |
Online Access | Get full text |
Summary: | This paper introduces a novel approach to illumination manipulation in diffusion models, addressing a gap in conditional image generation with a focus on lighting conditions. Whereas most methods employ ControlNet and its variants to provide illumination-aware guidance in diffusion models, we instead conceptualize the diffusion model as a black-box image renderer and strategically decompose its energy function in alignment with the image formation model. Our method effectively separates and controls illumination-related properties during the generative process. It generates images with realistic illumination effects, including cast shadows, soft shadows, and inter-reflections. Remarkably, it achieves this without requiring learned intrinsic decomposition, finding directions in latent space, or additional training on new datasets.
Highlights: | •Diffusion models can be treated as black-box renderers. •The diffusion energy function can drive lighting changes in image synthesis. •Training-free relighting can be achieved with proper physics-based constraints. |
ISSN: | 1077-3142 |
DOI: | 10.1016/j.cviu.2025.104450 |
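The abstract describes steering a pretrained diffusion model with an energy term aligned with the image formation model, with no retraining. The sketch below is a generic, hedged illustration of that idea rather than the paper's actual formulation: it injects a classifier-guidance-style gradient of a toy illumination energy into a plain DDPM sampling loop. The `TinyDenoiser` stand-in, the `illumination_energy` constraint (penalizing deviation of predicted luminance from a target shading map), and parameters such as `guidance_scale` and `target_shading` are all assumptions introduced for illustration.

```python
# Minimal sketch of training-free energy guidance in a diffusion sampling
# loop. The denoiser, energy function, and hyperparameters are placeholders,
# not the paper's method.

import torch
import torch.nn as nn


class TinyDenoiser(nn.Module):
    """Stand-in for a pretrained noise-prediction network eps_theta(x_t, t)."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Broadcast the (normalized) timestep as an extra input channel.
        t_map = t.float().view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, t_map], dim=1))


def illumination_energy(x0: torch.Tensor, target_shading: torch.Tensor) -> torch.Tensor:
    """Toy image-formation constraint: push the predicted image's luminance
    (a crude proxy for albedo * shading) toward a desired shading map."""
    luminance = x0.mean(dim=1, keepdim=True)
    return ((luminance - target_shading) ** 2).mean()


@torch.no_grad()
def sample_with_energy_guidance(denoiser, target_shading, steps=50,
                                guidance_scale=1.0, shape=(1, 3, 64, 64)):
    """DDPM-style sampling where each step nudges the predicted clean image x0
    downhill on the illumination energy, without retraining the denoiser."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x_t = torch.randn(shape)
    for i in reversed(range(steps)):
        t = torch.full((shape[0],), i / steps)
        a_bar = alpha_bars[i]

        # The energy gradient needs autograd, so re-enable it locally.
        with torch.enable_grad():
            x_in = x_t.detach().requires_grad_(True)
            eps = denoiser(x_in, t)
            x0_pred = (x_in - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()
            energy = illumination_energy(x0_pred, target_shading)
            grad = torch.autograd.grad(energy, x_in)[0]

        # Classifier-guidance-style shift of the noise estimate along the
        # energy gradient (log p proportional to -energy).
        eps = eps.detach() + guidance_scale * (1 - a_bar).sqrt() * grad

        # Standard DDPM mean update with the guided noise estimate.
        mean = (x_t - betas[i] / (1 - a_bar).sqrt() * eps) / alphas[i].sqrt()
        noise = torch.randn_like(x_t) if i > 0 else torch.zeros_like(x_t)
        x_t = mean + betas[i].sqrt() * noise

    return x_t.clamp(-1, 1)


if __name__ == "__main__":
    denoiser = TinyDenoiser()
    # Hypothetical target: a left-to-right shading gradient.
    shading = torch.linspace(-1, 1, 64).view(1, 1, 1, 64).expand(1, 1, 64, 64)
    img = sample_with_energy_guidance(denoiser, shading)
    print(img.shape)  # torch.Size([1, 3, 64, 64])
```

The design point mirroring the abstract is that the constraint touches only the sampling loop: the denoiser is used as a black box and its weights are never updated.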