Pre-Training Multimodal Hallucination Detectors with Corrupted Grounding Data

Bibliographic Details
Main Authors: Whitehead, Spencer; Phillips, Jacob; Hendryx, Sean
Format: Journal Article
Language: English
Published: 30.08.2024
Summary: Multimodal language models can exhibit hallucinations in their outputs, which limits their reliability. The ability to automatically detect these errors is important for mitigating them, but has been less explored and existing efforts do not localize hallucinations, instead framing this as a classification task. In this work, we first pose multimodal hallucination detection as a sequence labeling task where models must localize hallucinated text spans and present a strong baseline model. Given the high cost of human annotations for this task, we propose an approach to improve the sample efficiency of these models by creating corrupted grounding data, which we use for pre-training. Leveraging phrase grounding data, we generate hallucinations to replace grounded spans and create hallucinated text. Experiments show that pre-training on this data improves sample efficiency when fine-tuning, and that the learning signal from the grounding data plays an important role in these improvements.
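
The summary sketches the corrupted grounding data construction: a grounded phrase span in a caption is replaced with a generated hallucination, and the replaced tokens become the positive targets for a sequence labeling detector. A minimal sketch of that corruption step is below, assuming tokenized captions with known grounded span offsets; the function name, the fixed distractor pool, and the binary label scheme are illustrative assumptions, not the authors' actual implementation.

import random
from typing import List, Tuple

def make_corrupted_example(
    tokens: List[str],
    grounded_span: Tuple[int, int],       # [start, end) token indices of a grounded phrase
    distractor_phrases: List[List[str]],  # tokenized replacement phrases; stand-in for generated hallucinations
    rng: random.Random,
) -> Tuple[List[str], List[int]]:
    """Return (corrupted_tokens, labels), where labels[i] = 1 marks hallucinated tokens."""
    start, end = grounded_span
    replacement = rng.choice(distractor_phrases)
    corrupted = tokens[:start] + replacement + tokens[end:]
    labels = (
        [0] * start                    # prefix kept from the original caption
        + [1] * len(replacement)       # inserted span: hallucinated
        + [0] * (len(tokens) - end)    # suffix kept from the original caption
    )
    return corrupted, labels

# Example: corrupt the grounded phrase "red bicycle" in a caption.
rng = random.Random(0)
tokens = "a man riding a red bicycle on the street".split()
corrupted, labels = make_corrupted_example(
    tokens,
    grounded_span=(4, 6),  # "red bicycle"
    distractor_phrases=[["blue", "motorcycle"], ["green", "skateboard"]],
    rng=rng,
)
print(list(zip(corrupted, labels)))

In the paper the replacements are generated hallucinations derived from phrase grounding data rather than a fixed distractor list; the fixed pool above is only to keep the sketch self-contained.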
DOI: 10.48550/arxiv.2409.00238