Multi‐temporal scale aggregation refinement graph convolutional network for skeleton‐based action recognition


Bibliographic Details
Published in: Computer Animation and Virtual Worlds, Vol. 35, No. 1
Main Authors: Li, Xuanfeng; Lu, Jian; Zhou, Jian; Liu, Wei; Zhang, Kaibing
Format: Journal Article
Language: English
Published: Chichester: Wiley Subscription Services, Inc., 01.01.2024
Summary: Skeleton-based human action recognition is gaining significant attention and finding widespread application in fields such as virtual reality and human-computer interaction systems. Recent studies have highlighted the effectiveness of graph convolutional network (GCN) based methods for this task, leading to remarkable improvements in prediction accuracy. However, most GCN-based methods overlook the varying contributions of the self, centripetal, and centrifugal joint subsets. In addition, they adopt only a single temporal scale, ignoring multi-temporal-scale information. To address this, we first develop a refinement graph convolution that adaptively learns a weight for each subset feature, differentiating the importance of the skeleton subsets. Second, we propose a multi-temporal scale aggregation module to extract more discriminative temporal dynamics. Building on these components, we propose the multi-temporal scale aggregation refinement graph convolutional network (MTSA-RGCN) and adopt a four-stream structure, which comprehensively models complementary features and yields a significant performance boost. In empirical experiments, our approach substantially outperforms other state-of-the-art methods on both the NTU-RGB+D 60 and NTU-RGB+D 120 datasets.

Graphical abstract: The overall pipeline of the proposed method. The skeleton data is first fed into the RGCN to obtain basic feature representations; the RGCN learns richer spatial motion information of actions. Features at different temporal resolutions are then modulated in the temporal and spatial dimensions and aggregated into features with rich, discriminative temporal information for final classification.
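The abstract names the two components but gives no implementation details. The sketch below is a minimal PyTorch reading of those ideas, not the authors' code: a graph convolution with a learnable scalar weight per adjacency subset (self, centripetal, centrifugal), and a temporal module that sums parallel convolutions at several dilation rates. The module names, the kernel size of 9, and the dilation rates (1, 2, 3) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class RefinementGraphConv(nn.Module):
    """Graph convolution with one branch per skeleton subset (self,
    centripetal, centrifugal) and a learnable scalar weight per branch,
    so the network can emphasize the more informative subsets."""

    def __init__(self, in_channels, out_channels, adjacency):
        # adjacency: float tensor of shape (3, V, V), one matrix per subset
        super().__init__()
        self.register_buffer("A", adjacency)
        self.convs = nn.ModuleList(
            nn.Conv2d(in_channels, out_channels, kernel_size=1) for _ in range(3)
        )
        self.alpha = nn.Parameter(torch.ones(3))  # adaptive subset weights

    def forward(self, x):  # x: (N, C, T, V)
        out = 0.0
        for k in range(3):
            # aggregate joint features over subset k, then scale by its weight
            xk = torch.einsum("nctv,vw->nctw", x, self.A[k])
            out = out + self.alpha[k] * self.convs[k](xk)
        return out


class MultiTemporalScaleAggregation(nn.Module):
    """Parallel temporal convolutions with different dilation rates capture
    short- and long-range dynamics; their outputs are summed into a single
    feature map with the same temporal length."""

    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=(9, 1),
                      padding=(4 * d, 0), dilation=(d, 1))
            for d in dilations
        )

    def forward(self, x):  # x: (N, C, T, V)
        return sum(branch(x) for branch in self.branches)


# Toy usage: batch of 2 clips, 3 input channels, 64 frames, 25 joints (NTU layout).
A = torch.rand(3, 25, 25)
x = torch.rand(2, 3, 64, 25)
y = MultiTemporalScaleAggregation(16)(RefinementGraphConv(3, 16, A)(x))
print(y.shape)  # torch.Size([2, 16, 64, 25])
```

In a full model, blocks like these would presumably alternate in a stack, and the four streams (commonly joint, bone, and their motion differences in this line of work, though the abstract does not specify) would be trained separately and ensembled at the score level.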
ISSN: 1546-4261, 1546-427X
DOI: 10.1002/cav.2221