SkeletonNet: Mining Deep Part Features for 3-D Action Recognition

Bibliographic Details
Published in: IEEE Signal Processing Letters, Vol. 24, No. 6, pp. 731-735
Main Authors: Qiuhong Ke, Senjian An, Mohammed Bennamoun, Ferdous Sohel, Farid Boussaid
Format: Journal Article
Language: English
Published: IEEE, 01.06.2017
Summary: This letter presents SkeletonNet, a deep learning framework for skeleton-based 3-D action recognition. Given a skeleton sequence, the spatial structure of the skeleton joints in each frame and the temporal information across frames are two important factors for action recognition. We first extract body-part-based features from each frame of the skeleton sequence. Compared to the original coordinates of the skeleton joints, the proposed features are translation, rotation, and scale invariant. To learn robust temporal information, instead of treating the features of all frames as a time series, we transform the features into images and feed them to the proposed deep learning network, which contains two parts: one extracts general features from the input images, while the other generates a discriminative and compact representation for action recognition. The proposed method is tested on the SBU Kinect Interaction dataset, the CMU dataset, and the large-scale NTU RGB+D dataset, and achieves state-of-the-art performance.
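
To make the summary concrete, below is a minimal sketch (not the authors' exact pipeline) of the two ideas it describes: translation-, rotation-, and scale-invariant pairwise joint features per frame, and stacking the per-frame features into a grayscale image that a CNN with a fixed input size can consume. The joint indices, joint pairings, and image size are illustrative assumptions.

    import numpy as np

    def frame_features(joints, pairs):
        """Invariant features for one skeleton frame.
        joints: (J, 3) array of 3-D joint coordinates; pairs: list of
        (i, j) joint-index pairs (a hypothetical body-part grouping,
        not the paper's)."""
        # Difference vectors between paired joints are translation invariant.
        vecs = np.stack([joints[j] - joints[i] for i, j in pairs])   # (P, 3)
        mags = np.linalg.norm(vecs, axis=1) + 1e-8                   # (P,)
        # Cosines of the angles between the vectors are additionally
        # rotation and scale invariant.
        unit = vecs / mags[:, None]
        cos = unit @ unit.T                                          # (P, P)
        # Magnitudes normalized by the frame maximum are scale invariant.
        nm = mags / mags.max()
        return np.concatenate([cos[np.triu_indices(len(pairs), k=1)], nm])

    def sequence_to_image(sequence, pairs, size=(224, 224)):
        """Stack per-frame features into a 2-D array (time x feature),
        rescale to [0, 255], and resize to a fixed CNN input size."""
        feats = np.stack([frame_features(f, pairs) for f in sequence])
        lo, hi = feats.min(), feats.max()
        img = np.round(255.0 * (feats - lo) / (hi - lo + 1e-8)).astype(np.uint8)
        # A nearest-neighbour resize keeps the sketch dependency-free.
        rows = np.linspace(0, img.shape[0] - 1, size[0]).astype(int)
        cols = np.linspace(0, img.shape[1] - 1, size[1]).astype(int)
        return img[np.ix_(rows, cols)]

    # Example: 40 frames of a 25-joint skeleton (NTU RGB+D uses 25 joints).
    sequence = np.random.rand(40, 25, 3)
    pairs = [(4, 5), (5, 6), (6, 7)]  # hypothetical arm joint indices
    print(sequence_to_image(sequence, pairs).shape)  # (224, 224)

The resulting image-like array can then be fed to a CNN, matching the summary's description of a network whose first part extracts general features from the input images.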
ISSN: 1070-9908, 1558-2361
DOI: 10.1109/LSP.2017.2690339