Model-Based Inverse Reinforcement Learning from Visual Demonstrations
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:1930-1942, 2021
Format | Journal Article
Language | English
Published | 18.10.2020
Summary: Scaling model-based inverse reinforcement learning (IRL) to real robotic manipulation tasks with unknown dynamics remains an open problem. The key challenges lie in learning good dynamics models, developing algorithms that scale to high-dimensional state spaces, and learning from both visual and proprioceptive demonstrations. In this work, we present a gradient-based inverse reinforcement learning framework that uses a pre-trained visual dynamics model to learn cost functions given only visual human demonstrations. The learned cost functions are then used to reproduce the demonstrated behavior via visual model predictive control. We evaluate our framework on hardware on two basic object manipulation tasks.
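The structure the abstract describes — an inner loop that plans with a dynamics model under a parametrized cost, and an outer loop that adjusts the cost parameters so the planned trajectory matches the demonstration — can be sketched in a toy setting. Everything below is illustrative, not the paper's method: a 1-D integrator stands in for the learned visual dynamics model, a quadratic goal-plus-effort cost stands in for the learned cost function, and finite differences stand in for analytic gradients.

```python
import numpy as np

def rollout(x0, us):
    """Unroll the dynamics model (here a trivial 1-D integrator)."""
    xs = [x0]
    for u in us:
        xs.append(xs[-1] + u)
    return np.array(xs)

def traj_cost(xs, us, theta):
    """Hypothetical parametrized cost: goal-distance weight theta[0],
    action-effort weight theta[1]."""
    goal = 1.0
    return theta[0] * np.sum((xs - goal) ** 2) + theta[1] * np.sum(us ** 2)

def plan(x0, theta, horizon=5, iters=400, lr=0.02, eps=1e-5):
    """Inner loop: gradient descent on the action sequence under the
    current cost, with finite-difference gradients."""
    us = np.zeros(horizon)
    for _ in range(iters):
        base = traj_cost(rollout(x0, us), us, theta)
        grad = np.zeros(horizon)
        for i in range(horizon):
            up = us.copy()
            up[i] += eps
            grad[i] = (traj_cost(rollout(x0, up), up, theta) - base) / eps
        us = us - lr * grad
    return us

def irl_step(x0, demo_xs, theta, lr=0.05, eps=1e-3):
    """Outer loop: nudge the cost parameters so the planner's trajectory
    moves toward the demonstrated one."""
    def imitation_loss(th):
        us = plan(x0, th)
        return np.sum((rollout(x0, us) - demo_xs) ** 2)
    base = imitation_loss(theta)
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        tp = theta.copy()
        tp[i] += eps
        grad[i] = (imitation_loss(tp) - base) / eps
    # keep cost weights positive
    return np.clip(theta - lr * grad, 1e-3, None), base

x0 = 0.0
# synthetic "demonstration": the planner run under a hidden true cost
demo_xs = rollout(x0, plan(x0, np.array([1.0, 0.1])))

theta = np.array([0.3, 0.5])   # initial cost guess
losses = []
for _ in range(10):
    theta, loss = irl_step(x0, demo_xs, theta)
    losses.append(loss)
```

In the paper's setting, the inner planner operates on the latent state of a pre-trained visual dynamics model and the outer update differentiates through the planning procedure rather than using finite differences; this sketch only mirrors the bi-level shape of that computation.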
Bibliography: PMLR 155:1930-1942
DOI: 10.48550/arxiv.2010.09034