Have the VLMs Lost Confidence? A Study of Sycophancy in VLMs

Bibliographic Details
Published in: arXiv.org
Main Authors: Li, Shuo; Ji, Tao; Fan, Xiaoran; Lu, Linsheng; Yang, Leyi; Yang, Yuming; Xi, Zhiheng; Zheng, Rui; Wang, Yuran; Zhao, Xiaohui; Gui, Tao; Zhang, Qi; Huang, Xuanjing
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 15.10.2024

Summary: In the study of LLMs, sycophancy represents a prevalent hallucination that poses significant challenges to these models. Specifically, LLMs often fail to adhere to original correct responses, instead blindly agreeing with users' opinions, even when those opinions are incorrect or malicious. However, research on sycophancy in visual language models (VLMs) has been scarce. In this work, we extend the exploration of sycophancy from LLMs to VLMs, introducing the MM-SY benchmark to evaluate this phenomenon. We present evaluation results from multiple representative models, addressing the gap in sycophancy research for VLMs. To mitigate sycophancy, we propose a synthetic dataset for training and employ methods based on prompts, supervised fine-tuning, and DPO. Our experiments demonstrate that these methods effectively alleviate sycophancy in VLMs. Additionally, we probe VLMs to assess the semantic impact of sycophancy and analyze the attention distribution of visual tokens. Our findings indicate that the ability to prevent sycophancy is predominantly observed in higher layers of the model. The lack of attention to image knowledge in these higher layers may contribute to sycophancy, and enhancing image attention at high layers proves beneficial in mitigating this issue.
ISSN: 2331-8422
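
The abstract's closing claim, that boosting attention to image tokens in the higher layers mitigates sycophancy, can be illustrated with a minimal sketch. The abstract does not spell out the paper's exact mechanism, so everything below is a hypothetical illustration of one plausible realization: the function name boost_image_attention, the alpha boost factor, and the start_layer cutoff are all assumptions, not the authors' method.

```python
import torch

def boost_image_attention(
    attn_scores: torch.Tensor,       # (batch, heads, q_len, k_len) pre-softmax logits
    image_token_mask: torch.Tensor,  # (batch, k_len) bool, True at visual-token keys
    layer_idx: int,
    start_layer: int = 24,           # hypothetical cutoff: only touch "higher" layers
    alpha: float = 1.5,              # hypothetical boost factor, alpha > 1
) -> torch.Tensor:
    """Additively boost pre-softmax attention scores for image-token keys.

    Adding log(alpha) in logit space multiplies the unnormalized weight of
    every image key by alpha, so the softmax still normalizes to 1 while
    shifting probability mass toward the visual tokens.
    """
    if layer_idx < start_layer:
        return attn_scores  # leave lower layers untouched
    key_mask = image_token_mask[:, None, None, :].to(attn_scores.dtype)
    return attn_scores + key_mask * torch.log(torch.tensor(alpha))

# Toy usage: the first 4 of 16 key positions are image tokens, layer 30.
scores = torch.randn(1, 8, 16, 16)
mask = torch.zeros(1, 16, dtype=torch.bool)
mask[:, :4] = True
boosted = boost_image_attention(scores, mask, layer_idx=30)
probs = torch.softmax(boosted, dim=-1)  # image keys now carry ~alpha x their weight
```

Working in logit space rather than rescaling post-softmax probabilities keeps the attention distribution properly normalized without a second pass, which is why the boost is expressed as an additive log(alpha) term.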