Understanding Hallucinations in Diffusion Models through Mode Interpolation
Main Authors | Aithal, Sumukh K; Maini, Pratyush; Lipton, Zachary C; Kolter, J. Zico |
---|---|
Format | Journal Article |
Language | English |
Published | 13.06.2024 |
Subjects | Computer Science - Learning |
Online Access | https://arxiv.org/abs/2406.09358 |
Abstract | Colloquially speaking, image generation models based upon diffusion processes
are frequently said to exhibit "hallucinations," samples that could never occur
in the training data. But where do such hallucinations come from? In this
paper, we study a particular failure mode in diffusion models, which we term
mode interpolation. Specifically, we find that diffusion models smoothly
"interpolate" between nearby data modes in the training set, to generate
samples that are completely outside the support of the original training
distribution; this phenomenon leads diffusion models to generate artifacts that
never existed in real data (i.e., hallucinations). We systematically study the
reasons for, and the manifestation of, this phenomenon. Through experiments on
1D and 2D Gaussians, we show how a discontinuous loss landscape in the
diffusion model's decoder leads to a region where any smooth approximation will
cause such hallucinations. Through experiments on artificial datasets with
various shapes, we show how hallucination leads to the generation of
combinations of shapes that never existed. Finally, we show that diffusion
models in fact know when they go out of support and hallucinate. This is
captured by the high variance in the trajectory of the generated sample during
the final few steps of the backward sampling process. Using a simple metric to capture this
variance, we can remove over 95% of hallucinations at generation time while
retaining 96% of in-support samples. We conclude our exploration by showing the
implications of such hallucination (and its removal) on the collapse (and
stabilization) of recursive training on synthetic data with experiments on
the MNIST and 2D Gaussians datasets. We release our code at
https://github.com/locuslab/diffusion-model-hallucination. |
Copyright | http://creativecommons.org/licenses/by/4.0 |
DOI | 10.48550/arxiv.2406.09358 |
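
The abstract's detection idea, flagging generated samples whose sampling trajectory still shows high variance over the last few reverse-diffusion steps, can be illustrated with a short sketch. The snippet below is a hedged illustration rather than the paper's released code: the function names (`trajectory_variance`, `filter_hallucinations`), the window size `last_k`, and the threshold value are assumptions made here for demonstration; the repository linked above contains the authors' actual implementation.

```python
# A minimal sketch, not the paper's released implementation: score each
# generated sample by the variance of its reverse-diffusion trajectory over
# the final few steps, and drop high-variance samples as likely hallucinations.
# Function names, last_k, and the threshold are illustrative assumptions.
import numpy as np


def trajectory_variance(trajectory, last_k=10):
    """Sum of per-dimension variances over the last `last_k` sampling steps.

    trajectory: array of shape (num_steps, *sample_shape) holding the
    intermediate predictions recorded during reverse diffusion.
    A high score means the sample was still moving near the end of sampling.
    """
    tail = trajectory[-last_k:]
    return float(tail.var(axis=0).sum())


def filter_hallucinations(trajectories, threshold):
    """Keep samples whose trajectory-variance score falls below `threshold`."""
    scores = np.array([trajectory_variance(t) for t in trajectories])
    return scores < threshold, scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    steps, dim = 100, 2
    # Toy stand-ins: an in-support trajectory settles early, while a
    # hallucination-like trajectory keeps fluctuating in the final steps.
    settled = rng.normal(size=(1, dim)) + rng.normal(0.0, 0.01, (steps, dim))
    wandering = rng.normal(size=(1, dim)) + rng.normal(0.0, 0.5, (steps, dim))
    keep, scores = filter_hallucinations(
        np.stack([settled, wandering]), threshold=0.1
    )
    print(scores)  # the second score is orders of magnitude larger
    print(keep)    # -> [ True False]
```

In a real pipeline the threshold would presumably be calibrated on trajectories of known in-support samples, so that the trade-off the abstract reports (removing most hallucinations while retaining most in-support samples) can be tuned to taste.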