Model-Based Inverse Reinforcement Learning from Visual Demonstrations



Bibliographic Details
Main Authors: Das, Neha; Bechtle, Sarah; Davchev, Todor; Jayaraman, Dinesh; Rai, Akshara; Meier, Franziska
Format: Journal Article
Language: English
Published: 18.10.2020

Summary: Proceedings of the 2020 Conference on Robot Learning, PMLR 155:1930-1942, 2021. Scaling model-based inverse reinforcement learning (IRL) to real robotic manipulation tasks with unknown dynamics remains an open problem. The key challenges lie in learning good dynamics models, developing algorithms that scale to high-dimensional state-spaces and being able to learn from both visual and proprioceptive demonstrations. In this work, we present a gradient-based inverse reinforcement learning framework that utilizes a pre-trained visual dynamics model to learn cost functions when given only visual human demonstrations. The learned cost functions are then used to reproduce the demonstrated behavior via visual model predictive control. We evaluate our framework on hardware on two basic object manipulation tasks.
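The summary describes a bilevel structure: an inner loop that plans actions under the current cost using a learned dynamics model (visual MPC), and an outer loop that adjusts the cost parameters so the planned trajectory matches the demonstration. A minimal toy sketch of that structure is given below, under strong simplifying assumptions: the dynamics model is replaced by a known 1-D integrator (the paper uses a pre-trained visual dynamics model over image observations), the cost is a hypothetical two-weight quadratic, and both loops use finite-difference gradient descent. None of the names, parameterizations, or step sizes here come from the paper; this only illustrates the bilevel shape of gradient-based IRL.

```python
import numpy as np

GOAL, HORIZON = 1.0, 5  # illustrative 1-D reaching task

def traj_cost(u, w):
    """Trajectory cost under learnable weights w = [state_w, action_w].
    Dynamics stand-in: x_{t+1} = x_t + u_t (not the paper's visual model)."""
    x, c = 0.0, 0.0
    for ut in u:
        x = x + ut
        c += w[0] * (x - GOAL) ** 2 + w[1] * ut ** 2
    return c

def plan(w, iters=100, lr=0.05, eps=1e-4):
    """Inner loop (MPC stand-in): finite-difference gradient descent
    on the action sequence under the current cost weights."""
    u = np.zeros(HORIZON)
    for _ in range(iters):
        g = np.zeros(HORIZON)
        for i in range(HORIZON):
            up, um = u.copy(), u.copy()
            up[i] += eps
            um[i] -= eps
            g[i] = (traj_cost(up, w) - traj_cost(um, w)) / (2 * eps)
        u -= lr * g
    return u

def irl_loss(w, demo_u):
    """Outer objective: how far the planned behavior is from the demo."""
    return float(np.sum((plan(w) - demo_u) ** 2))

# Synthetic "demonstration": planned under hidden true weights.
demo_u = plan(np.array([1.0, 0.1]))

# Outer loop (gradient-based IRL): update cost weights so that
# re-planning reproduces the demonstrated actions.
w = np.array([0.5, 0.5])
loss0 = irl_loss(w, demo_u)
for _ in range(30):
    g = np.zeros(2)
    for i in range(2):
        wp, wm = w.copy(), w.copy()
        wp[i] += 1e-3
        wm[i] -= 1e-3
        g[i] = (irl_loss(wp, demo_u) - irl_loss(wm, demo_u)) / 2e-3
    w = np.maximum(w - 0.01 * g, 1e-3)  # keep cost weights positive

print(loss0, irl_loss(w, demo_u))
```

In the paper, the outer gradient is obtained by differentiating through the planner and the visual dynamics model rather than by finite differences; the sketch uses finite differences only to keep the example self-contained.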
Bibliography: PMLR 155:1930-1942
DOI: 10.48550/arxiv.2010.09034