Low-rank human-like agents are trusted more and blamed less in human-autonomy teaming

Bibliographic Details
Published in: Frontiers in Artificial Intelligence, Vol. 7, p. 1273350
Main Authors: Gall, Jody; Stanton, Christopher J
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Media S.A., 29.04.2024
Summary: If humans are to team with artificial teammates, factors that influence trust and shared accountability must be considered when designing agents. This study investigates the influence of anthropomorphism, rank, decision cost, and task difficulty on trust in human-autonomous teams (HAT) and how blame is apportioned if shared tasks fail. Participants (N = 31) completed repeated trials with an artificial teammate using a low-fidelity variation of an air-traffic control game. Using a within-subject design, we manipulated anthropomorphism (human-like or machine-like), the military rank of artificial teammates using three-star (superior), two-star (peer), or one-star (subordinate) agents, decision cost via the perceived payload of vehicles (people or supplies onboard), and task difficulty (easy or hard missions). A behavioural measure of trust was inferred when participants accepted agent recommendations, and a measure of distrust when recommendations were rejected or ignored. Trust data were analysed using binomial logistic regression. After each trial, blame was apportioned using a two-item scale and analysed using a one-way repeated measures ANOVA. A post-experiment questionnaire obtained participants' power distance orientation using a seven-item scale. Possible power-related effects on trust and blame apportioning are discussed. Our findings suggest that artificial agents with higher levels of anthropomorphism and lower levels of rank increased trust and shared accountability, with human team members accepting more blame for team failures.
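
To make the analysis pipeline in the summary concrete, the following is a minimal sketch of the two named analyses: a binomial logistic regression on trial-level trust decisions and a one-way repeated measures ANOVA on blame ratings. It assumes a hypothetical long-format dataset (hat_trials.csv) with invented column names; neither the data file nor the variable codings come from the paper.

    # Illustrative sketch only: column names, codings, and the CSV file
    # are assumptions, not taken from Gall and Stanton (2024).
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical long-format data: one row per participant x trial, with
    #   participant - participant ID
    #   trusted     - 1 if the agent's recommendation was accepted, else 0
    #   anthro      - "human-like" or "machine-like"
    #   rank        - "one-star", "two-star", or "three-star"
    #   payload     - "people" or "supplies" (decision cost)
    #   difficulty  - "easy" or "hard"
    #   blame       - mean of the two-item blame scale for the trial
    trials = pd.read_csv("hat_trials.csv")

    # Binomial logistic regression: model the accept/reject trust decision
    # as a function of the four manipulated factors.
    trust_model = smf.logit(
        "trusted ~ C(anthro) + C(rank) + C(payload) + C(difficulty)",
        data=trials,
    ).fit()
    print(trust_model.summary())

    # One-way repeated measures ANOVA on blame with rank as the
    # within-subject factor, averaging blame within each participant x rank
    # cell first so AnovaRM sees one observation per cell.
    cell_means = trials.groupby(["participant", "rank"],
                                as_index=False)["blame"].mean()
    print(AnovaRM(cell_means, depvar="blame", subject="participant",
                  within=["rank"]).fit())

Note that a plain logit ignores the repeated-measures structure of the trust data; a mixed-effects or GEE variant would account for within-participant clustering, but the abstract names only the binomial logistic regression.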
Edited by: Georgios Leontidis, University of Aberdeen, United Kingdom
Reviewed by: Sunitha Basodi, Tri-institutional Center for Translational Research in Neuroimaging and Data Science, United States; Florian Georg Jentsch, University of Central Florida, United States
ISSN: 2624-8212
DOI: 10.3389/frai.2024.1273350