Is this AI sexist? The effects of a biased AI’s anthropomorphic appearance and explainability on users’ bias perceptions and trust
Published in | International journal of information management, Vol. 76, p. 102775 |
Format | Journal Article |
Language | English |
Published | Elsevier Ltd, 01.06.2024 |
Summary: | Biases in artificial intelligence (AI), a pressing issue in human-AI interaction, can be exacerbated by AI systems’ opaqueness. This paper reports on our development of a user-centered explainable-AI approach to reducing such opaqueness, guided by the theoretical framework of anthropomorphism and the results of two 3 × 3 between-subjects experiments (n = 207 and n = 223). Specifically, those experiments investigated how, in a gender-biased hiring situation, three levels of AI human-likeness (low, medium, high) and three levels of richness of AI explanation (none, lean, rich) influenced users’ 1) perceptions of AI bias and 2) adoption of AI’s recommendations, as well as how such perceptions and adoption varied across participant characteristics such as gender and pre-existing trust in AI. We found that comprehensive explanations helped users to recognize AI bias and mitigate its influence, and that this effect was particularly pronounced among females in a scenario where females were being discriminated against. Follow-up interviews corroborated our quantitative findings. These results can usefully inform explainable AI interface design.
• We study user responses to AI gender bias in high-stakes hiring contexts.
• We examine the impact of AI's appearance and capability on bias perception.
• Enhancing AI's humanlike appearance revealed bias against female applicants.
• Richer AI explanations revealed bias against females but not bias against males.
• Females can better leverage AI explanations to detect bias and inform decisions.
ISSN | 0268-4012, 1873-4707 |
DOI | 10.1016/j.ijinfomgt.2024.102775 |