The Fusion and Verification of 2D Human Skeleton and 3D Point Cloud based on RealSense


Bibliographic Details
Published in: International Conference on Advanced Robotics and Intelligent Systems (Online), pp. 1-5
Main Authors: Yang, Chao-Lung, Wu, Shao-Qing, Chen, Syuan-Jen, Kao, Tzu-Ching, Huang, Chao-Hung, Liou, En, Lin, Po-Ting, Hua, Kai-Lung
Format: Conference Proceeding
Language: English
Published: IEEE, 30.08.2023

Summary: Recognizing the human body's 3D coordinates is important for human-robot collaboration because a robot may need to deliver objects to a human or avoid collisions. This study proposed a method to fuse human skeletons extracted from an RGB sensor with depth information from the Intel RealSense D455 Depth Camera to accurately estimate 3D human coordinates. First, MediaPipe pose estimation was used to extract the human body skeleton. By aligning the skeleton joints with the corresponding depth frame, the depth value at each skeleton point can be obtained. The 3D skeleton point cloud is then generated by computing the joint coordinates through the RealSense SDK and rendering them with OpenGL. To validate the extracted depth coordinates, experiments were conducted with test subjects at various distances. The experimental results showed a measurement error within 1.5% at distances of 1, 1.5, and 2 meters, establishing a reliable reference for the proposed approach.
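The core fusion step described in the summary — mapping a 2D skeleton joint plus its depth value to a 3D camera-frame coordinate — can be sketched in a few lines. The sketch below is illustrative only: the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) and frame size are hypothetical placeholders, not the D455's actual calibration; in the paper's pipeline these values come from the RealSense SDK, whose `rs2_deproject_pixel_to_point` performs the same pinhole back-projection.

```python
def landmark_to_pixel(x_norm, y_norm, width, height):
    """MediaPipe pose landmarks are normalized to [0, 1]; scale to pixel coords."""
    return int(x_norm * width), int(y_norm * height)

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) + depth -> 3D point (meters)
    in the camera frame, matching what rs2_deproject_pixel_to_point computes
    for a distortion-free model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example: a joint detected at normalized (0.5, 0.5) in a 640x480 frame,
# with a looked-up depth of 1.5 m and illustrative intrinsics.
u, v = landmark_to_pixel(0.5, 0.5, 640, 480)
point = deproject(u, v, 1.5, fx=380.0, fy=380.0, cx=320.0, cy=240.0)
```

In the actual system, the depth frame would first be aligned to the color frame (so that the pixel indexed by the skeleton joint and the depth sample coincide) before this deprojection is applied per joint.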
ISSN:2572-6919
DOI:10.1109/ARIS59192.2023.10268484