Mixed Reality-Based 6D-Pose Annotation System for Robot Manipulation in Retail Environments


Bibliographic Details
Published in: 2024 IEEE/SICE International Symposium on System Integration (SII), pp. 1425–1432
Main Authors: Tornberg, Carl; El Hafi, Lotfi; Uriguen Eljuri, Pedro Miguel; Yamamoto, Masaki; Garcia Ricardez, Gustavo Alfonso; Solis, Jorge; Taniguchi, Tadahiro
Format: Conference Proceeding
Language: English
Published: IEEE, 08.01.2024
Summary: Robot manipulation in retail environments is a challenging task due to the need for large amounts of annotated data for accurate 6D-pose estimation of items. Onsite data collection, additional manual annotation, and model fine-tuning are often required when deploying robots in new environments, as varying lighting conditions, clutter, and occlusions can significantly diminish performance. Therefore, we propose a system that annotates the 6D pose of items using mixed reality (MR) to enhance the robustness of robot manipulation in retail environments. Our main contribution is a system that displays the 6D-pose estimation results of a trained model from multiple perspectives in MR and enables onsite (re-)annotation of incorrectly inferred item poses using hand gestures. In an extensive quantitative experiment, the proposed system is compared to a PC-based annotation system that uses a mouse and the robot camera's point cloud. Our experimental results indicate that MR can increase the accuracy of pose annotation, especially by reducing position errors.
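The position errors mentioned in the abstract are typically measured as the Euclidean distance between annotated and ground-truth translations, with orientation error measured as the geodesic angle between rotations. A minimal sketch of such metrics, assuming poses are given as a translation vector plus a unit quaternion in (w, x, y, z) order (the function names and conventions here are illustrative, not taken from the paper):

```python
import math

def position_error(t_est, t_gt):
    """Euclidean distance between estimated and ground-truth translations.

    t_est, t_gt: 3-tuples (x, y, z), e.g. in meters.
    """
    return math.dist(t_est, t_gt)

def rotation_error_deg(q_est, q_gt):
    """Geodesic angle between two unit quaternions (w, x, y, z), in degrees.

    The absolute value of the dot product handles the double cover
    (q and -q represent the same rotation).
    """
    dot = abs(sum(a * b for a, b in zip(q_est, q_gt)))
    dot = min(1.0, dot)  # guard against floating-point overshoot
    return math.degrees(2.0 * math.acos(dot))

# A 2 cm annotation offset yields a 0.02 m position error:
print(position_error((0.10, 0.00, 0.05), (0.12, 0.00, 0.05)))
# Identity vs. a 90-degree rotation about z yields a 90-degree error:
print(rotation_error_deg((1.0, 0.0, 0.0, 0.0),
                         (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))))
```

Reporting translation and rotation errors separately, as above, matches how 6D-pose annotation accuracy is commonly evaluated.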
ISBN: 9798350312072; 9798350312089
ISSN: 2474-2325
DOI: 10.1109/SII58957.2024.10417443