Shallow diffusion networks provably learn hidden low-dimensional structure

Bibliographic Details
Main Authors: Boffi, Nicholas M.; Jacot, Arthur; Tu, Stephen; Ziemann, Ingvar
Format: Journal Article
Language: English
Published: 15.10.2024

More Information
Summary: Diffusion-based generative models provide a powerful framework for learning to sample from a complex target distribution. The remarkable empirical success of these models applied to high-dimensional signals, including images and video, stands in stark contrast to classical results highlighting the curse of dimensionality for distribution recovery. In this work, we take a step towards understanding this gap through a careful analysis of learning diffusion models over the Barron space of single-layer neural networks. In particular, we show that these shallow models provably adapt to simple forms of low-dimensional structure, thereby avoiding the curse of dimensionality. We combine our results with recent analyses of sampling with diffusion models to provide an end-to-end sample complexity bound for learning to sample from structured distributions. Importantly, our results do not require specialized architectures tailored to particular latent structures, and instead rely on the low-index structure of the Barron space to adapt to the underlying distribution.
DOI: 10.48550/arxiv.2410.11275
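
The record itself contains no code; as a minimal sketch of the setting the summary describes, the snippet below trains a single-hidden-layer (shallow) score network via denoising score matching on synthetic data supported on a hidden low-dimensional subspace. The data model, network width, noise schedule, and all hyperparameters are illustrative assumptions for this sketch, not the paper's construction.

```python
# Illustrative sketch only: a shallow (single-hidden-layer) score network
# trained with denoising score matching on data whose support lies in a
# hidden k-dimensional subspace of R^d. All choices here are assumptions
# made for illustration; this is not the paper's implementation.
import torch

torch.manual_seed(0)

d, k, n = 32, 2, 4096                       # ambient dim, latent dim, samples
U = torch.linalg.qr(torch.randn(d, k)).Q    # random k-dim subspace of R^d
x = torch.randn(n, k) @ U.T                 # data supported on that subspace

# Shallow score model, mirroring the single-layer (Barron-space) setting.
width = 256
score_net = torch.nn.Sequential(
    torch.nn.Linear(d + 1, width),          # input: noised sample and noise level
    torch.nn.ReLU(),
    torch.nn.Linear(width, d),
)
opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)

for step in range(2000):
    batch = x[torch.randint(n, (256,))]
    sigma = torch.rand(256, 1) * 0.9 + 0.1  # noise scale drawn from [0.1, 1.0)
    eps = torch.randn_like(batch)
    noised = batch + sigma * eps
    inp = torch.cat([noised, sigma], dim=1)
    # Sigma-weighted denoising score matching: the conditional score of the
    # Gaussian smoothing kernel is -eps / sigma, so the weighted residual is
    # sigma * score + eps.
    loss = ((sigma * score_net(inp) + eps) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the illustration is the gap between the ambient dimension d and the latent dimension k: in the paper's analysis, it is the low-index structure of the Barron space of such shallow networks that lets the learned score adapt to the k-dimensional support, rather than any architecture tailored to the latent structure.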