Research on demonstration task segmentation method based on multi-mode information


Bibliographic Details
Published in: 2022 12th International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), pp. 343-347
Main Authors: Zhang, Wei; Cao, Tieze; Sun, Anbing; Gan, Xiaochuan; Fan, Jingjing; Hao, Lina; Cheng, Hongtai
Format: Conference Proceeding
Language: English
Published: IEEE, 27.07.2022

Summary: Since demonstration data contain information such as the instructor's demonstration intention, the pose of the workpiece, and environmental constraints, it is difficult to segment teaching data accurately with a single method. This paper therefore proposes a segmentation method based on multimodal information. The demonstration data are first coarsely segmented using gestures, trajectory variance, and contact force; the demonstration tasks are then accurately divided into unconstrained tasks, position-constrained tasks, and force-constrained tasks by fused segmentation criteria. Finally, the effectiveness of the proposed multimodal segmentation method is verified by reproducing assembly experiments on planetary gear reducers.
ISSN: 2642-6633
DOI: 10.1109/CYBER55403.2022.9907753
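The summary describes classifying demonstration segments into unconstrained, position-constrained, and force-constrained tasks from trajectory variance and contact force. A minimal sketch of that idea is below; the function name, features, and threshold values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def classify_segment(positions, forces, var_thresh=1e-4, force_thresh=2.0):
    """Label one demonstration segment from end-effector positions (N, 3)
    and contact-force magnitudes (N,). Thresholds are hypothetical."""
    pos_var = float(np.var(positions, axis=0).sum())  # total trajectory variance
    mean_force = float(np.mean(np.abs(forces)))       # average contact-force level
    if mean_force > force_thresh:
        return "force-constrained"     # sustained contact force dominates
    if pos_var < var_thresh:
        return "position-constrained"  # tool held near a fixed pose
    return "unconstrained"             # free-space motion

# Example: large positional spread with no contact force reads as free motion.
rng = np.random.default_rng(0)
free_motion = rng.uniform(0.0, 0.5, size=(100, 3))
print(classify_segment(free_motion, np.zeros(100)))  # unconstrained
```

In practice the paper fuses these cues with gesture information for the coarse segmentation stage; this sketch only shows the per-segment threshold logic on two of the modalities.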