A human-centered safe robot reinforcement learning framework with interactive behaviors

Bibliographic Details
Published in: Frontiers in Neurorobotics, Vol. 17, p. 1280341
Main Authors: Gu, Shangding; Kshirsagar, Alap; Du, Yali; Chen, Guang; Peters, Jan; Knoll, Alois
Format: Journal Article
Language: English
Published: Frontiers Research Foundation / Frontiers Media S.A., Switzerland, 09.11.2023

Summary: Deployment of Reinforcement Learning (RL) algorithms for robotics applications in the real world requires ensuring the safety of the robot and its environment. Safe Robot RL (SRRL) is a crucial step toward achieving human-robot coexistence. In this paper, we envision a human-centered SRRL framework consisting of three stages: safe exploration, safety value alignment, and safe collaboration. We examine the research gaps in these areas and propose to leverage interactive behaviors for SRRL. Interactive behaviors enable bi-directional information transfer between humans and robots, as exemplified by the conversational agent ChatGPT. We argue that interactive behaviors need further attention from the SRRL community. We discuss four open challenges related to the robustness, efficiency, transparency, and adaptability of SRRL with interactive behaviors.
ISSN: 1662-5218
DOI: 10.3389/fnbot.2023.1280341