ReProHRL: Towards Multi-Goal Navigation in the Real World using Hierarchical Agents

Bibliographic Details
Published in: arXiv.org
Main Authors: Manjunath, Tejaswini; Navardi, Mozhgan; Dixit, Prakhar; Prakash, Bharat; Mohsenin, Tinoosh
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 17.08.2023
Summary: Robots have been successfully used to perform tasks with high precision. In real-world environments with sparse rewards and multiple goals, learning remains a major challenge, and Reinforcement Learning (RL) algorithms fail to learn good policies. Training in simulation environments and then fine-tuning in the real world is a common approach; however, adapting to the real-world setting remains difficult. In this paper, we present a method named Ready for Production Hierarchical RL (ReProHRL), which decomposes multi-goal navigation tasks hierarchically and guides them with reinforcement learning. We also use object detectors as a pre-processing step to learn multi-goal navigation and to transfer it to the real world. Empirical results show that the proposed ReProHRL method outperforms the state-of-the-art baseline in both simulation and real-world environments in terms of training time and performance. Although both methods achieve a 100% success rate for single-goal navigation in a simple environment, the proposed method outperforms the baseline by 18% in a more complex environment and by 5% in the multi-goal setting. For the real-world implementation and a proof-of-concept demonstration, we deploy the proposed method on a Crazyflie nano-drone with a front camera to perform multi-goal navigation experiments.
ISSN: 2331-8422
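
The record does not include any code, and the paper's exact pre-processing pipeline is only described in the full text. As a rough illustration of the idea summarized above, using an object detector as a pre-processing step for goal-conditioned navigation, the following minimal Python sketch converts a detector's bounding boxes into a compact, goal-conditioned observation vector for an RL policy. All names, the class list, the detection format, and the feature layout are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Hypothetical goal classes; the actual classes depend on the detector and the task.
CLASSES = ["door", "chair", "person"]

def detections_to_observation(detections, goal_class, image_width=320, image_height=240):
    """Turn raw detector output into a compact, goal-conditioned feature vector.

    detections: list of (class_name, confidence, (x, y, w, h)) tuples from any
                off-the-shelf object detector (an assumed format, not the paper's).
    goal_class: the class the agent is currently navigating toward.
    Returns a fixed-size array: [goal one-hot | goal-visible flag |
    normalized horizontal offset of the goal | normalized box area].
    """
    goal_onehot = np.zeros(len(CLASSES), dtype=np.float32)
    goal_onehot[CLASSES.index(goal_class)] = 1.0

    visible, offset_x, area = 0.0, 0.0, 0.0
    best_conf = 0.0
    for cls, conf, (x, y, w, h) in detections:
        if cls == goal_class and conf > best_conf:
            best_conf = conf
            visible = 1.0
            # Horizontal offset of the box center from the image center, in [-1, 1].
            offset_x = ((x + w / 2.0) - image_width / 2.0) / (image_width / 2.0)
            # Fraction of the image covered by the box, a rough proxy for distance.
            area = (w * h) / float(image_width * image_height)
    return np.concatenate([goal_onehot, [visible, offset_x, area]]).astype(np.float32)

# Example usage with made-up detections while the current goal is "chair".
obs = detections_to_observation(
    detections=[("chair", 0.9, (150, 80, 60, 90)), ("person", 0.7, (10, 20, 40, 100))],
    goal_class="chair",
)
print(obs)  # e.g. [0., 1., 0., 1., 0.125, 0.0703]
```

A low-dimensional, goal-conditioned vector like this is one plausible way such pre-processing could ease sim-to-real transfer, since the policy then depends on detector outputs rather than raw camera pixels; the paper's actual design should be taken from the full text.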