A framework of explanation generation toward reliable autonomous robots
Published in: Advanced Robotics, Vol. 35, No. 17, pp. 1054-1067
Main Authors:
Format: Journal Article
Language: English
Published: Taylor & Francis, 02.09.2021
Summary: To realize autonomous collaborative robots, it is important to increase the trust that users place in them. Toward this goal, this paper proposes an algorithm that endows an autonomous agent with the ability to explain the transition from the current state to the target state in a Markov decision process (MDP). According to cognitive science, an explanation that is acceptable to humans should present the minimum information necessary to sufficiently understand an event. To meet this requirement, the authors propose a framework that identifies the important elements of the decision-making process using a prediction model of the world and generates explanations based on those elements. To verify the proposed method, an experiment was conducted in a grid environment. The results of a simulation experiment indicated that the generated explanation comprised the minimum elements important for understanding the transition from the current state to the target state. Furthermore, experiments with human participants showed that the generated explanation was a good summary of the state-transition process, and that the explanation of the reason for each action was highly rated.
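The abstract describes selecting the minimum set of important elements along a state transition in an MDP. As a rough illustration only (this is not the paper's prediction-model-based algorithm; all function names and the direction-change criterion below are hypothetical), one can sketch the idea of compressing a full grid-world trajectory down to the few states a human would need to understand it:

```python
# Toy proxy for "minimum important elements" of a state transition
# in a deterministic grid world. Hypothetical sketch, not the paper's method.

def greedy_path(start, goal):
    """Walk a deterministic grid greedily toward the goal (x first, then y)."""
    path = [start]
    x, y = start
    while (x, y) != goal:
        if x != goal[0]:
            x += 1 if goal[0] > x else -1
        else:
            y += 1 if goal[1] > y else -1
        path.append((x, y))
    return path

def key_points(path):
    """Keep only the start, the goal, and states where the movement
    direction changes: a crude stand-in for the minimum elements
    needed to understand the transition."""
    keys = [path[0]]
    for prev, cur, nxt in zip(path, path[1:], path[2:]):
        d1 = (cur[0] - prev[0], cur[1] - prev[1])
        d2 = (nxt[0] - cur[0], nxt[1] - cur[1])
        if d1 != d2:  # the agent turned here
            keys.append(cur)
    keys.append(path[-1])
    return keys

path = greedy_path((0, 0), (3, 2))
print(key_points(path))  # → [(0, 0), (3, 0), (3, 2)]
```

A six-state trajectory is summarized by three states: where the agent started, where it turned, and where it stopped, mirroring the abstract's claim that a good explanation is a compact summary of the transition process.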
ISSN: 0169-1864; 1568-5535
DOI: 10.1080/01691864.2021.1946423