Explaining AI weaknesses improves human–AI performance in a dynamic control task
Published in: International Journal of Human-Computer Studies, Vol. 199, p. 103505
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.05.2025
Summary: AI-based decision support is increasingly deployed to assist operators in dynamic control tasks. While these systems continuously improve, truly achieving human–system synergy also requires studying humans' understanding of the system and their behavior. Accordingly, we investigated the impact of explainability instructions regarding a specific system weakness on performance and trust in two experiments (with higher task demands in Experiment 2). Participants performed a dynamic control task with support from either an explainable AI (XAI; information on a system weakness), a non-explainable AI (nonXAI; no information on the system weakness), or without support (manual; Experiment 2 only). Results show that participants with XAI support outperformed those in the nonXAI group, particularly in situations where the AI actually erred. Notably, informing users of system weaknesses did not affect trust once they had interacted with the system. In addition, Experiment 2 demonstrated a general benefit of decision support over manual operation under higher task demands. These findings suggest that AI support can enhance performance in complex tasks and that providing information on potential system weaknesses helps users manage system errors and allocate resources without compromising trust.
Highlights:
•Two experiments examined explainable AI in a dynamic supervisory control task.
•One group of subjects was informed about AI weaknesses, i.e., where errors can occur.
•Explainability improved performance, especially in cases where the AI erred.
•Weakness information did not negatively affect trust in the AI after the interaction.
•Explaining AI weaknesses enhanced human–AI collaboration without compromising trust.
ISSN: 1071-5819
DOI: 10.1016/j.ijhcs.2025.103505