Reinforcement Learning With Vision-Proprioception Model for Robot Planar Pushing

Bibliographic Details
Published in Frontiers in Neurorobotics Vol. 16; p. 829437
Main Authors Cong, Lin; Liang, Hongzhuo; Ruppel, Philipp; Shi, Yunlei; Görner, Michael; Hendrich, Norman; Zhang, Jianwei
Format Journal Article
Language English
Published Switzerland: Frontiers Research Foundation / Frontiers Media S.A., 02.03.2022

Summary: We propose a vision-proprioception model for planar object pushing that efficiently integrates all necessary information from the environment. A Variational Autoencoder (VAE) is used to extract compact representations from the task-relevant part of the image. With the real-time robot state readily available from the hardware, we fuse the latent representation from the VAE with the robot end-effector position to form the state of a Markov Decision Process (MDP). We use Soft Actor-Critic (SAC) to train the robot in simulation to push different objects from random initial poses to target positions, applying Hindsight Experience Replay (HER) during training to improve sample efficiency. Experiments demonstrate that our algorithm achieves pushing performance superior to that of a state-based baseline model, which does not generalize to different objects, and outperforms state-of-the-art policies that operate on raw image observations. Finally, we verify that the trained model generalizes well to unseen objects in the real world.
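
The abstract outlines the core state construction: a VAE compresses the task-relevant image crop into a latent vector, which is fused with the end-effector position to form the MDP state consumed by the SAC policy. Below is a minimal illustrative sketch of that fusion step, not the authors' implementation; the latent dimension, image size, and network widths are assumptions, and the SAC training loop with HER-relabeled goals would operate on the fused state produced here.

# Minimal sketch (PyTorch) of the vision-proprioception state described in the
# abstract. Latent size (8), crop size (64x64), and layer widths are assumptions.
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
        )
        self.mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.logvar = nn.Linear(128 * 8 * 8, latent_dim)

    def forward(self, image):
        h = self.conv(image)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick; at policy-execution time mu alone suffices.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

def fuse_state(image, ee_position, encoder):
    """Concatenate the VAE latent code with the end-effector position
    to form the MDP state fed to the SAC policy."""
    with torch.no_grad():
        _, mu, _ = encoder(image)
    return torch.cat([mu, ee_position], dim=-1)

if __name__ == "__main__":
    encoder = VAEEncoder(latent_dim=8)
    image = torch.rand(1, 3, 64, 64)   # task-relevant image crop
    ee_position = torch.rand(1, 2)     # planar end-effector position (x, y)
    state = fuse_state(image, ee_position, encoder)
    print(state.shape)                 # torch.Size([1, 10]) -> SAC policy input

In this sketch the image crop is assumed to already contain only the task-relevant region; the VAE would be pretrained on such crops, and goal positions used for HER relabeling would be appended to the state in the same way.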
Reviewed by: Yuning Cui, Technical University of Munich, Germany; Xiangtong Yao, Technical University of Munich, Germany
Edited by: Zhenshan Bing, Technical University of Munich, Germany
ISSN:1662-5218
DOI:10.3389/fnbot.2022.829437