Collision-free Grasp Detection From Color and Depth Images

Bibliographic Details
Published in: IEEE Transactions on Artificial Intelligence, pp. 1-10
Main Authors: Hoang, Dinh-Cuong; Nguyen, Anh-Nhat; Nguyen, Chi-Minh; Phi, An-Binh; Duong, Quang-Tri; Tran, Khanh-Duong; Trinh, Viet-Anh; Tran, Van-Duc; Pham, Hai-Nam; Ngo, Phuc-Quan; Vu, Duy-Quang; Nguyen, Thu-Uyen; Vu, Van-Duc; Tran, Duc-Thanh; Nguyen, Van-Thiep
Format: Journal Article
Language: English
Published: IEEE, 28.06.2024

More Information
Summary: Efficient and reliable grasp pose generation plays a crucial role in robotic manipulation tasks. The advancement of deep learning techniques applied to point cloud data has led to rapid progress in grasp detection. However, point cloud data has limitations: no appearance information and susceptibility to sensor noise. In contrast, color (RGB) images offer high-resolution and intricate textural details, making them a valuable complement to the three-dimensional geometry offered by point clouds or depth (D) images. Nevertheless, the effective integration of appearance information to enhance point cloud-based grasp detection remains an open question. In this study, we extend the concepts of VoteGrasp [1] and introduce an innovative deep learning approach referred to as VoteGrasp (RGBD). To build robustness to occlusion, the proposed model generates candidates by casting votes and accumulating evidence for feasible grasp configurations. This methodology revolves around fusing votes extracted from images and point clouds. To further enhance the collaborative effect of merging appearance and geometry features, we introduce a context learning module. We exploit contextual information by encoding the dependency of objects in the scene into features to boost the performance of grasp generation. The contextual information enables our model to increase the likelihood that the generated grasps are collision-free. The efficacy of our model is verified through comprehensive evaluations on the demanding GraspNet-1Billion dataset, leading to a significant improvement of 9.3 in Average Precision (AP) over the existing state-of-the-art results. Additionally, we provide extensive analyses through ablation studies to elucidate the contributions of each design decision.
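The summary describes fusing per-point votes derived from image and point-cloud features before accumulating evidence for grasp candidates. The sketch below is a minimal, hypothetical illustration of that data flow only; every name, shape, intrinsic matrix, and the random "vote regressor" are assumptions for demonstration and are not the authors' implementation.

```python
# Minimal sketch of RGB-D vote fusion (illustrative assumptions only).
# Geometric features stand in for a point-cloud backbone's output; appearance
# features are gathered from an image feature map by pinhole projection.
import numpy as np

rng = np.random.default_rng(0)

N, C_geo, C_rgb = 1024, 64, 32  # number of points, geometric / appearance channels
points = np.column_stack([rng.uniform(-0.3, 0.3, N),
                          rng.uniform(-0.3, 0.3, N),
                          rng.uniform(0.3, 1.2, N)])      # 3D points, camera frame (m)
geo_feat = rng.normal(size=(N, C_geo))                    # hypothetical geometry features
img_feat = rng.normal(size=(60, 80, C_rgb))               # hypothetical image feature map
K = np.array([[600., 0., 320.],
              [0., 600., 240.],
              [0., 0., 1.]])                              # assumed camera intrinsics

def gather_appearance(points, img_feat, K, stride=8):
    """Project 3D points into the image and sample per-point appearance features."""
    uvw = points @ K.T                                    # [u*z, v*z, z]
    u = np.clip((uvw[:, 0] / uvw[:, 2] / stride).astype(int), 0, img_feat.shape[1] - 1)
    v = np.clip((uvw[:, 1] / uvw[:, 2] / stride).astype(int), 0, img_feat.shape[0] - 1)
    return img_feat[v, u]                                 # (N, C_rgb)

# Fuse appearance and geometry per point, then regress a vote offset toward a
# grasp center; a random, untrained linear map is used purely for illustration.
fused = np.concatenate([geo_feat, gather_appearance(points, img_feat, K)], axis=1)
W = rng.normal(scale=0.01, size=(fused.shape[1], 3))
votes = points + fused @ W                                # (N, 3) voted grasp centers

# Accumulate evidence: a coarse voxel-grid vote count stands in for clustering.
voxels, counts = np.unique(np.floor(votes / 0.05).astype(int), axis=0, return_counts=True)
candidates = voxels[np.argsort(-counts)[:10]] * 0.05      # top-10 voted regions
print(candidates.shape)                                   # (10, 3) candidate centers
```

In the paper itself, the vote regression and candidate aggregation are learned, and a context learning module further encodes object dependencies; the sketch only shows where appearance and geometry features could meet in such a pipeline.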
ISSN: 2691-4581
DOI: 10.1109/TAI.2024.3420848