What Else Do I Need to Know? The Effect of Background Information on Users' Reliance on QA Systems

Bibliographic Details
Published in: arXiv.org
Main Authors: Goyal, Navita; Briakou, Eleftheria; Liu, Amanda; Baumler, Connor; Bonial, Claire; Micher, Jeffrey; Voss, Clare R.; Carpuat, Marine; Daumé, Hal
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 26.10.2023

Summary: NLP systems have shown impressive performance at answering questions by retrieving relevant context. However, as models grow increasingly large, it becomes impossible and often undesirable to constrain a model's knowledge or reasoning to only the retrieved context. This leads to a mismatch between the information the model uses to derive its answer and the information available to the user to assess the predicted answer. In this work, we study how users interact with QA systems in the absence of sufficient information to assess the systems' predictions. Further, we ask whether adding the requisite background helps mitigate users' over-reliance on predictions. Our study reveals that users rely on model predictions even when they lack the information needed to assess the model's correctness. Providing the relevant background, however, helps users better catch model errors, reducing over-reliance on incorrect predictions. On the flip side, background information also increases users' confidence in both their accurate and their inaccurate judgments. Our work highlights that supporting users' verification of QA predictions is an important, yet challenging, problem.
ISSN: 2331-8422