From Pixels to Tokens: Revisiting Object Hallucinations in Large Vision-Language Models
Format: Journal Article
Language: English
Published: 09.10.2024
Summary: Hallucinations in large vision-language models (LVLMs), i.e., generating objects that are not present in the visual input, are a significant challenge that impairs their reliability. Recent studies often attribute hallucinations to a lack of understanding of visual input, yet ignore a more fundamental issue: the model's inability to effectively extract or decouple visual features. In this paper, we revisit hallucinations in LVLMs from an architectural perspective, investigating whether the primary cause lies in the visual encoder (feature extraction) or the modal alignment module (feature decoupling). Motivated by the findings of our preliminary investigation, we propose a novel tuning strategy, PATCH, to mitigate hallucinations in LVLMs. This plug-and-play method can be integrated into various LVLMs, utilizing adaptive virtual tokens to extract object features from bounding boxes, thereby addressing hallucinations caused by insufficient decoupling of visual features. PATCH achieves state-of-the-art performance on multiple multi-modal hallucination datasets. We hope this approach provides researchers with deeper insights into the underlying causes of hallucinations in LVLMs, fostering further advancements and innovation in this field.
DOI: 10.48550/arxiv.2410.06795
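The abstract describes PATCH as a plug-and-play tuning strategy that uses adaptive virtual tokens to extract object features from bounding boxes. The sketch below is a minimal illustration of that idea under assumed PyTorch conventions; the class and parameter names (PatchVirtualTokens, num_virtual_tokens, box_proj) are hypothetical and not the authors' released implementation.

```python
# Minimal sketch, assuming a PyTorch LVLM pipeline. Names are illustrative,
# not the paper's released code.
import torch
import torch.nn as nn
from torchvision.ops import roi_align


class PatchVirtualTokens(nn.Module):
    """Pools object features from bounding boxes and pairs them with learnable
    virtual tokens before they are passed to the language model."""

    def __init__(self, vision_dim: int, llm_dim: int, num_virtual_tokens: int = 8):
        super().__init__()
        # Learnable virtual tokens, trained while the LVLM backbone stays frozen.
        self.virtual_tokens = nn.Parameter(torch.randn(num_virtual_tokens, llm_dim) * 0.02)
        # Projects box-pooled visual features into the LLM embedding space.
        self.box_proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, feature_map: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # feature_map: (1, C, H, W) visual-encoder output for one image.
        # boxes: (K, 4) xyxy boxes, assumed already scaled to feature-map coordinates.
        pooled = roi_align(feature_map, [boxes], output_size=1)  # (K, C, 1, 1)
        obj_tokens = self.box_proj(pooled.flatten(1))            # (K, llm_dim)
        # Prepend the shared virtual tokens to the per-object tokens; downstream,
        # this sequence would be concatenated with the usual image/text embeddings.
        return torch.cat([self.virtual_tokens, obj_tokens], dim=0)
```

Because only the virtual tokens and the small projection layer are trainable in this sketch, it stays plug-and-play in the sense the abstract suggests: the visual encoder and language model of an existing LVLM are left untouched, and the added tokens supply explicitly decoupled, object-level features.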