Analyzing and Improving the Training Dynamics of Diffusion Models

Bibliographic Details
Published in: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 24174-24184
Main Authors: Karras, Tero; Aittala, Miika; Lehtinen, Jaakko; Hellsten, Janne; Aila, Timo; Laine, Samuli
Format: Conference Proceeding
Language: English
Published: IEEE, 16.06.2024

Summary: Diffusion models currently dominate the field of data-driven image synthesis with their unparalleled scaling to large datasets. In this paper, we identify and rectify several causes for uneven and ineffective training in the popular ADM diffusion model architecture, without altering its high-level structure. Observing uncontrolled magnitude changes and imbalances in both the network activations and weights over the course of training, we redesign the network layers to preserve activation, weight, and update magnitudes on expectation. We find that systematic application of this philosophy eliminates the observed drifts and imbalances, resulting in considerably better networks at equal computational complexity. Our modifications improve the previous record FID of 2.41 in ImageNet-512 synthesis to 1.81, achieved using fast deterministic sampling. As an independent contribution, we present a method for setting the exponential moving average (EMA) parameters post-hoc, i.e., after completing the training run. This allows precise tuning of EMA length without the cost of performing several training runs, and reveals its surprising interactions with network architecture, training time, and guidance.
ISSN: 2575-7075
DOI: 10.1109/CVPR52733.2024.02282
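
The summary above centers on keeping activation, weight, and update magnitudes controlled throughout training. As a rough illustration of that principle (a minimal sketch, not the authors' implementation; the class and helper names are assumptions), the code below shows a linear layer that forcibly re-normalizes its weight rows during training and applies a fixed 1/sqrt(fan_in) scaling at use, so that unit-variance inputs yield approximately unit-variance outputs no matter how long training runs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_rows(w: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Rescale each output row of w so its elements have unit RMS magnitude."""
    norm = w.norm(dim=1, keepdim=True)
    return w * (w.shape[1] ** 0.5 / (norm + eps))


class MagnitudePreservingLinear(nn.Module):
    """Illustrative linear layer whose weight magnitudes cannot drift."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Raw weights start at unit element-wise RMS (standard normal init).
        self.weight = nn.Parameter(torch.randn(out_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        if self.training:
            with torch.no_grad():
                # Forced normalization: undo any magnitude drift accumulated
                # by optimizer updates before the weights are used.
                w.copy_(normalize_rows(w))
        # Differentiable normalization at use, plus 1/sqrt(fan_in) scaling so
        # unit-variance inputs produce approximately unit-variance outputs.
        w = normalize_rows(w)
        return F.linear(x, w / (w.shape[1] ** 0.5))
```

With every layer built along these lines, activation magnitudes stay roughly constant across depth and over training time, which is the behavior the redesign described in the summary aims for.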