Safe Reinforcement Learning in a Simulated Robotic Arm

Bibliographic Details
Published in: arXiv.org
Main Authors: Kovač, Luka; Farkaš, Igor
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 28.02.2024

Summary: Reinforcement learning (RL) agents need to explore their environments in order to learn optimal policies. In many environments and tasks, safety is of critical importance. The widespread use of simulators offers a number of advantages, among them safe exploration, which becomes indispensable once RL systems must be trained directly in the physical environment (e.g. in human-robot interaction). The popular Safety Gym library offers three mobile agent types that can learn goal-directed tasks while respecting various safety constraints. In this paper, we extend the applicability of safe RL algorithms by creating a customized environment with a Panda robotic arm in which Safety Gym algorithms can be tested. We performed pilot experiments with the popular PPO algorithm, comparing the baseline against its constrained version, and show that the constrained version learns an equally good policy while complying better with the safety constraints, at the cost of longer training time, as expected.
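
To make the setup described in the summary concrete, the sketch below shows the two ingredients it mentions: a Panda-arm environment that reports a per-step safety cost in the Safety Gym style (via info["cost"]), and a Lagrangian penalty of the kind used in constrained PPO. This is purely illustrative and not the authors' code: it assumes Gymnasium and the third-party panda-gym package (task id PandaReach-v3), and the wrapper name, hazard geometry, cost limit, and dual learning rate are hypothetical choices.

```python
# Illustrative sketch (not the paper's implementation): a Safety-Gym-style
# cost signal on a Panda reach task, plus a PPO-Lagrangian-style multiplier.
# Assumes: pip install gymnasium panda-gym. Hazard and hyperparameters are
# made up for the example.

import gymnasium as gym
import numpy as np
import panda_gym  # registers PandaReach-v3 and related tasks


class SafetyCostWrapper(gym.Wrapper):
    """Expose a per-step safety cost in info["cost"], as Safety Gym does."""

    def __init__(self, env, hazard_center, hazard_radius=0.1):
        super().__init__(env)
        self.hazard_center = np.asarray(hazard_center, dtype=np.float32)
        self.hazard_radius = hazard_radius

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        # Cost = 1 when the end effector enters a (hypothetical) hazard sphere.
        ee_pos = obs["observation"][:3]  # end-effector position in PandaReach obs
        dist = np.linalg.norm(ee_pos - self.hazard_center)
        info["cost"] = float(dist < self.hazard_radius)
        return obs, reward, terminated, truncated, info


class LagrangeMultiplier:
    """Dual variable for a Lagrangian-constrained (PPO-Lagrangian-style) update."""

    def __init__(self, cost_limit=25.0, lr=0.05):
        self.cost_limit = cost_limit  # allowed cost per episode (illustrative)
        self.lr = lr
        self.lam = 0.0

    def update(self, episode_cost):
        # Dual ascent: grow lambda while the constraint is violated,
        # let it shrink back toward zero otherwise.
        self.lam = max(0.0, self.lam + self.lr * (episode_cost - self.cost_limit))

    def penalized_reward(self, reward, cost):
        # The reward the constrained policy actually optimizes.
        return reward - self.lam * cost


if __name__ == "__main__":
    env = SafetyCostWrapper(gym.make("PandaReach-v3"), hazard_center=[0.1, 0.0, 0.1])
    lagrange = LagrangeMultiplier()
    obs, _ = env.reset(seed=0)
    episode_cost = 0.0
    for _ in range(50):
        action = env.action_space.sample()  # stands in for a PPO policy
        obs, reward, terminated, truncated, info = env.step(action)
        episode_cost += info["cost"]
        _ = lagrange.penalized_reward(reward, info["cost"])
        if terminated or truncated:
            break
    lagrange.update(episode_cost)
    print(f"episode cost={episode_cost:.0f}, lambda={lagrange.lam:.3f}")
```

The design mirrors the trade-off the summary reports: because the multiplier only suppresses reward while the cost budget is exceeded, the constrained agent can converge to an equally good policy, but the extra dual-ascent loop is one reason training takes longer than baseline PPO.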
ISSN: 2331-8422