The HA4M dataset: Multi-Modal Monitoring of an assembly task for Human Action recognition in Manufacturing

Bibliographic Details
Published in: Scientific Data, Vol. 9, No. 1, Article 745 (12 pages)
Main Authors: Cicirelli, Grazia; Marani, Roberto; Romeo, Laura; Domínguez, Manuel García; Heras, Jónathan; Perri, Anna G.; D’Orazio, Tiziana
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK (Nature Portfolio), 02.12.2022
Summary: This paper introduces the Human Action Multi-Modal Monitoring in Manufacturing (HA4M) dataset, a collection of multi-modal data on actions performed by different subjects while building an Epicyclic Gear Train (EGT). In particular, 41 subjects executed several trials of the assembly task, which consists of 12 actions. Data were collected in a laboratory scenario using a Microsoft® Azure Kinect, which integrates a depth camera, an RGB camera, and InfraRed (IR) emitters. To the best of the authors’ knowledge, the HA4M dataset is the first multi-modal dataset of an assembly task containing six types of data: RGB images, depth maps, IR images, RGB-to-depth-aligned images, point clouds, and skeleton data. These data provide a solid foundation for developing and testing advanced action recognition systems in fields such as Computer Vision and Machine Learning, and in application domains such as smart manufacturing and human-robot collaboration.
Measurement(s): human actions in a manufacturing context
Technology Type(s): Microsoft Azure Kinect camera
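As a rough illustration of how the six modalities might be read into memory for action recognition experiments, the Python sketch below loads one frame of each data type. The directory layout, file names, and file formats used here are assumptions made purely for illustration; this record does not specify how the dataset is organised on disk.

# Minimal sketch: load one multi-modal HA4M frame.
# NOTE: file names, formats, and layout below are assumptions, not the
# dataset's documented structure.
from pathlib import Path

import cv2          # OpenCV for image and depth I/O
import numpy as np

def load_sample(sample_dir: str) -> dict:
    """Load RGB, aligned RGB, depth, IR, and skeleton data for one frame."""
    d = Path(sample_dir)

    # RGB and RGB-to-depth-aligned images (8-bit colour)
    rgb = cv2.imread(str(d / "rgb.png"), cv2.IMREAD_COLOR)
    rgb_aligned = cv2.imread(str(d / "rgb_to_depth.png"), cv2.IMREAD_COLOR)

    # Depth map and IR image, assumed stored as 16-bit images
    depth = cv2.imread(str(d / "depth.png"), cv2.IMREAD_UNCHANGED)  # millimetres
    ir = cv2.imread(str(d / "ir.png"), cv2.IMREAD_UNCHANGED)

    # Skeleton data, assumed here to be a CSV of 3D joint positions
    skeleton = np.loadtxt(d / "skeleton.csv", delimiter=",")

    return {"rgb": rgb, "rgb_aligned": rgb_aligned,
            "depth": depth, "ir": ir, "skeleton": skeleton}

A loader of this kind would typically be wrapped in a dataset class that iterates over subjects, trials, and the 12 action labels before feeding the modalities to an action recognition model.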
ISSN: 2052-4463
DOI: 10.1038/s41597-022-01843-z