Meta-action descriptor for action recognition in RGBD video


Bibliographic Details
Published in IET Computer Vision, Vol. 11, No. 4, pp. 301-308
Main Authors Huang, Min, Su, Song-Zhi, Cai, Guo-Rong, Zhang, Hong-Bo, Cao, Donglin, Li, Shao-Zi
Format Journal Article
Language English
Published The Institution of Engineering and Technology 01.06.2017
Wiley

Summary: Action recognition is one of the most active research topics in computer vision. Recent methods represent actions using global or local video features. These approaches, however, lack semantic structure and may not provide deep insight into the essence of an action. In this work, the authors argue that semantic clues, such as joint positions and part-level motion clustering, help verify actions. To this end, a meta-action descriptor for action recognition in RGBD video is proposed. Specifically, two discrimination-based strategies, dynamic and discriminative part clustering, are introduced to improve accuracy. Experiments on the MSR Action 3D dataset show that the proposed method significantly outperforms methods that do not exploit joint-position semantics.
ISSN:1751-9632
1751-9640
DOI:10.1049/iet-cvi.2016.0252