Efficient Learning of Goal-Oriented Push-Grasping Synergy in Clutter

Bibliographic Details
Published in: IEEE Robotics and Automation Letters, Vol. 6, No. 4, pp. 6337-6344
Main Authors: Xu, Kechun; Yu, Hongxiang; Lai, Qianen; Wang, Yue; Xiong, Rong
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.10.2021
Summary: We focus on the task of goal-oriented grasping, in which a robot must grasp a pre-assigned goal object in clutter and often needs pre-grasp actions, such as pushes, to enable a stable grasp. In this task, however, the robot receives a positive reward from the environment only when it successfully grasps the goal object. Moreover, joint pushing and grasping lengthens the action sequence, compounding the problem of reward delay. Sample inefficiency therefore remains a main challenge in this task. In this letter, a goal-conditioned hierarchical reinforcement learning formulation with high sample efficiency is proposed to learn a push-grasping policy for grasping a specific object in clutter. Sample efficiency is improved in two ways. First, a goal-conditioned mechanism based on goal relabeling enriches the replay buffer. Second, the pushing and grasping policies are treated as a generator and a discriminator, respectively, and the pushing policy is trained under the supervision of the grasping discriminator, densifying the pushing rewards. To handle the distribution mismatch caused by the different training settings of the two policies, an alternating training stage is added that learns pushing and grasping in turn. Experiments in simulation and the real world show that our method quickly learns effective pushing and grasping policies and outperforms existing methods in task completion rate and goal grasp success rate while using fewer motions. Furthermore, we validate that our system also adapts to goal-agnostic conditions with better performance. Note that our system transfers to the real world without any fine-tuning. Our code is available at https://github.com/xukechun/Efficient_goal-oriented_push-grasping_synergy
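The summary describes two sample-efficiency mechanisms that can be illustrated concretely. First, a minimal Python sketch of the goal-relabeling idea, in the spirit of hindsight experience replay: when the robot grasps some object other than the intended goal, the transition is additionally stored as a success for that object. All names here (ReplayBuffer, store_with_relabeling, grasped_object) are illustrative assumptions, not the paper's actual code, which is in the linked repository.

    import random
    from collections import deque

    class ReplayBuffer:
        """Fixed-capacity buffer of (observation, action, goal, reward) tuples."""
        def __init__(self, capacity=10000):
            self.buffer = deque(maxlen=capacity)

        def add(self, transition):
            self.buffer.append(transition)

        def sample(self, batch_size):
            return random.sample(self.buffer, min(batch_size, len(self.buffer)))

    def store_with_relabeling(buffer, obs, action, intended_goal, grasped_object):
        # Store the transition under the intended goal: reward 1 only on success.
        reward = 1.0 if grasped_object == intended_goal else 0.0
        buffer.add((obs, action, intended_goal, reward))
        # Hindsight relabeling: if a different object was grasped, also store the
        # transition as a success for that object, enriching the replay buffer.
        if grasped_object is not None and grasped_object != intended_goal:
            buffer.add((obs, action, grasped_object, 1.0))

Second, a sketch of the generator/discriminator coupling that densifies pushing rewards: a push is rewarded when it increases the grasping network's confidence (here, its maximum Q-value) on the goal object. The grasp_q_map callable, the goal mask, and the margin value are assumptions made for illustration, not parameters taken from the paper.

    def push_reward(grasp_q_map, state_before, state_after, goal_mask, margin=0.1):
        # Maximum grasp Q-value on the goal object before and after the push.
        q_before = (grasp_q_map(state_before) * goal_mask).max()
        q_after = (grasp_q_map(state_after) * goal_mask).max()
        # The push earns a reward only if it made the goal measurably more
        # graspable; the margin threshold is an assumed placeholder.
        return 1.0 if q_after - q_before > margin else 0.0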
ISSN: 2377-3766
DOI: 10.1109/LRA.2021.3092640