A human factors analysis of proactive support in human-robot teaming

Bibliographic Details
Published in: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3586–3593
Main Authors: Zhang, Yu; Narayanan, Vignesh; Chakraborti, Tathagata; Kambhampati, Subbarao
Format: Conference Proceeding
Language: English
Published: IEEE, 01.09.2015
DOI: 10.1109/IROS.2015.7353878

Summary: It has long been assumed that for effective human-robot teaming, it is desirable for assistive robots to infer the goals and intents of their human teammates and take proactive actions to help them achieve those goals. However, there has not been any systematic evaluation of the accuracy of this claim. On the face of it, there are several ways a proactive robot assistant can in fact reduce the effectiveness of teaming. For example, it can increase the cognitive load of the human teammate by performing actions that the human does not anticipate. In such cases, even though teaming performance could be improved, it is unclear whether humans are willing to adapt to robot actions, or are able to adapt in a timely manner. Furthermore, misinterpretations and delays in goal and intent recognition, due to partial observations and limited communication, can also reduce performance. In this paper, our aim is to perform an analysis of human factors affecting the effectiveness of such proactive support in human-robot teaming. We perform our evaluation in a simulated Urban Search and Rescue (USAR) task, in which the efficacy of teaming depends not only on individual performance but also on teammates' interactions with each other. In this task, the human teammate remotely controls a robot while working with an intelligent robot teammate, "Mary". Our main result shows that the subjects generally preferred Mary with the ability to provide proactive support (compared to Mary without this ability). Our results also show that human cognitive load increased with a proactive assistant (albeit not significantly), even though the subjects appeared to interact with it less.