Pointing out Human Answer Mistakes in a Goal-Oriented Visual Dialogue

Bibliographic Details
Published in: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), pp. 4665–4670
Main Authors: Oshima, Ryosuke; Shinagawa, Seitaro; Tsunashima, Hideki; Feng, Qi; Morishima, Shigeo
Format: Conference Proceeding
Language: English
Published: IEEE, 02.10.2023
Summary: Effective communication between humans and intelligent agents has promising applications for solving complex problems. One such approach is visual dialogue, which leverages multimodal context to assist humans. However, real-world scenarios occasionally involve human mistakes, which can cause intelligent agents to fail. While most prior research assumes perfect answers from human interlocutors, we focus on a setting where the agent points out unintentional mistakes for the interlocutor to review, better reflecting real-world situations. In this paper, by analyzing a previously unused data collection of human mistakes, we show that human answer mistakes depend on the question type and the QA turn in the visual dialogue. Through experiments using a simple MLP model and a Visual Language Model, we demonstrate that these factors improve the model's accuracy on the task of pointing out human mistakes.
ISSN: 2473-9944
DOI: 10.1109/ICCVW60793.2023.00503