Influence of Different Explanation Types on Robot-Related Human Factors in Robot Navigation Tasks

Bibliographic Details
Published in: 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 1084 - 1091
Main Authors: Eder, Matthias; Konczol, Clemens; Kienzl, Julian; Mosbacher, Jochen A.; Kubicek, Bettina; Steinbauer-Wagner, Gerald
Format: Conference Proceeding
Language: English
Published: IEEE, 26.08.2024
Summary: The field of robotics has shown significant advances in autonomous systems, particularly in robot navigation. Since the decisions made during navigation can be difficult for human operators to understand, research aims to provide explanations that improve human-robot interaction (HRI). However, generating and designing such explanations with the intention of improving robot-related human factors remains an ongoing research challenge. This paper addresses this challenge by investigating the impact of different explanation types on a set of human factors in the context of robot navigation. For this purpose, we conducted a user study that examined the impact of six different explanation types on commonly used human factors, including trust, satisfaction, situation awareness, likeability, understandability, and perceived usefulness. Additionally, the study provides indications of their general applicability for robot navigation explanations through the creation of sum ranks across the observed human factor metrics. The results show that, depending on the chosen explanation type, a significant impact on the measured factors can be observed. While constraint-based explanations are generally rated highly across all factors, apologetic explanations are perceived poorly across all measured human factors. Our results provide insights into how the explanation types used in robot navigation scenarios affect robot-related human factors, and also offer practical guidance for designing explanations for robot navigation scenarios.
ISSN: 1944-9437
DOI: 10.1109/RO-MAN60168.2024.10731192
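Note on the sum ranks mentioned in the summary: the sketch below is a minimal, illustrative example (not the authors' code) of how ranks per human factor metric can be summed across metrics to compare explanation types. The "other-type" label and all numeric scores are invented placeholders; only the constraint-based and apologetic labels come from the summary above.

# Illustrative sketch only: sum ranks across human factor metrics.
# The scores below are invented placeholder values, not study data.
scores = {
    "constraint-based": {"trust": 4.2, "satisfaction": 4.0, "understandability": 4.3},
    "apologetic":       {"trust": 3.1, "satisfaction": 2.9, "understandability": 3.0},
    "other-type":       {"trust": 3.8, "satisfaction": 3.7, "understandability": 3.9},
}

metrics = ["trust", "satisfaction", "understandability"]
sum_ranks = {name: 0 for name in scores}

for metric in metrics:
    # Rank explanation types on this metric; the highest mean score gets rank 1.
    ordered = sorted(scores, key=lambda name: scores[name][metric], reverse=True)
    for rank, name in enumerate(ordered, start=1):
        sum_ranks[name] += rank

# A lower sum rank indicates an explanation type that is consistently
# rated better across the observed metrics.
for name, total in sorted(sum_ranks.items(), key=lambda item: item[1]):
    print(f"{name}: sum rank {total}")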