Feedback Efficient Online Fine-Tuning of Diffusion Models

Bibliographic Details
Published in: arXiv.org
Main Authors: Uehara, Masatoshi; Zhao, Yulai; Black, Kevin; Hajiramezanali, Ehsan; Scalia, Gabriele; Diamant, Nathaniel Lee; Tseng, Alex M; Levine, Sergey; Biancalani, Tommaso
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 18.07.2024
Summary: Diffusion models excel at modeling complex data distributions, including those of images, proteins, and small molecules. However, in many cases, our goal is to model parts of the distribution that maximize certain properties: for example, we may want to generate images with high aesthetic quality, or molecules with high bioactivity. It is natural to frame this as a reinforcement learning (RL) problem, in which the objective is to fine-tune a diffusion model to maximize a reward function that corresponds to some property. Even with access to online queries of the ground-truth reward function, efficiently discovering high-reward samples can be challenging: they might have a low probability in the initial distribution, and there might be many infeasible samples that do not even have a well-defined reward (e.g., unnatural images or physically impossible molecules). In this work, we propose a novel reinforcement learning procedure that efficiently explores on the manifold of feasible samples. We present a theoretical analysis providing a regret guarantee, as well as empirical validation across three domains: images, biological sequences, and molecules.
ISSN: 2331-8422
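
As a rough illustration of the reward-maximization setup described in the summary (not the paper's actual algorithm), the minimal sketch below uses a toy learnable Gaussian in place of a diffusion model and plain REINFORCE to shift probability mass toward high-reward samples; the reward function, model, and hyperparameters are all illustrative assumptions.

import torch

def reward(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for an online ground-truth reward query
    # (e.g., aesthetic score, bioactivity); here: prefer samples near 3.0.
    return -(x - 3.0) ** 2

# Toy "pretrained model": a learnable Gaussian over 1-D samples.
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(500):
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    x = dist.sample((64,))            # draw candidate samples
    r = reward(x)                     # online reward queries
    baseline = r.mean()               # simple variance-reduction baseline
    # REINFORCE: raise the log-probability of above-average-reward samples.
    loss = -((r - baseline).detach() * dist.log_prob(x)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"learned mean ~ {mu.item():.2f}")  # should drift toward 3.0

A real instance of the problem replaces the Gaussian with a pretrained diffusion model and must additionally keep exploration on the manifold of feasible samples, which is the part the paper's procedure addresses.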