Action Segmentation Using 2D Skeleton Heatmaps and Multi-Modality Fusion

Bibliographic Details
Published in: 2024 IEEE International Conference on Robotics and Automation (ICRA), pp. 1048-1055
Main Authors: Hyder, Syed Waleed; Usama, Muhammad; Zafar, Anas; Naufil, Muhammad; Fateh, Fawad Javed; Konin, Andrey; Zia, M. Zeeshan; Tran, Quoc-Huy
Format: Conference Proceeding
Language: English
Published: IEEE, 13.05.2024

Summary: This paper presents a 2D skeleton-based action segmentation method with applications in fine-grained human activity recognition. In contrast with state-of-the-art methods, which directly take sequences of 3D skeleton coordinates as inputs and apply Graph Convolutional Networks (GCNs) for spatiotemporal feature learning, our main idea is to use sequences of 2D skeleton heatmaps as inputs and employ Temporal Convolutional Networks (TCNs) to extract spatiotemporal features. Despite lacking 3D information, our approach yields comparable or superior performance and greater robustness to missing keypoints than previous methods on action segmentation datasets. Moreover, we further improve performance by using both 2D skeleton heatmaps and RGB videos as inputs. To the best of our knowledge, this is the first work to utilize 2D skeleton heatmap inputs and the first work to explore 2D skeleton+RGB fusion for action segmentation.
DOI: 10.1109/ICRA57147.2024.10610644
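
The sketch below is not the authors' implementation; it only illustrates the pipeline described in the summary: render per-frame 2D keypoints as Gaussian heatmaps, pool them into per-frame features, and apply a dilated temporal convolutional stack to produce frame-wise action logits. The heatmap resolution, Gaussian sigma, channel sizes, and layer count are illustrative assumptions.

# Minimal sketch (assumptions noted above), in PyTorch.
import torch
import torch.nn as nn


def keypoints_to_heatmaps(keypoints, height=64, width=64, sigma=2.0):
    """Render (T, K, 2) normalized keypoints into (T, K, H, W) Gaussian heatmaps."""
    T, K, _ = keypoints.shape
    ys = torch.arange(height).view(1, 1, height, 1).float()
    xs = torch.arange(width).view(1, 1, 1, width).float()
    cx = keypoints[..., 0].view(T, K, 1, 1) * (width - 1)
    cy = keypoints[..., 1].view(T, K, 1, 1) * (height - 1)
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))


class HeatmapTCN(nn.Module):
    """Per-frame heatmap encoder followed by a dilated temporal convolution stack."""

    def __init__(self, num_keypoints, num_classes, hidden=64, num_layers=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(num_keypoints, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # one feature vector per frame
        )
        layers = []
        for i in range(num_layers):
            dilation = 2 ** i  # exponentially growing temporal receptive field
            layers += [
                nn.Conv1d(hidden, hidden, kernel_size=3,
                          padding=dilation, dilation=dilation),
                nn.ReLU(),
            ]
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Conv1d(hidden, num_classes, kernel_size=1)

    def forward(self, heatmaps):
        # heatmaps: (T, K, H, W) -> frame-wise class logits (T, num_classes)
        feats = self.encoder(heatmaps).flatten(1)      # (T, hidden)
        feats = feats.transpose(0, 1).unsqueeze(0)     # (1, hidden, T)
        logits = self.head(self.tcn(feats))            # (1, num_classes, T)
        return logits.squeeze(0).transpose(0, 1)       # (T, num_classes)


# Usage: 100 frames, 17 keypoints (COCO-style), 5 hypothetical action classes.
kpts = torch.rand(100, 17, 2)
model = HeatmapTCN(num_keypoints=17, num_classes=5)
print(model(keypoints_to_heatmaps(kpts)).shape)  # torch.Size([100, 5])

The dilated 1D convolutions are one common way to give a TCN a temporal receptive field that grows exponentially with depth, which is a usual motivation for TCN-based action segmentation over purely frame-wise classifiers; the specific architecture here is an assumption, not the paper's design.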