Skip-Pose Vectors: Pose-based motion embedding using Encoder-Decoder models

Bibliographic Details
Published in: 2019 16th International Conference on Machine Vision Applications (MVA), pp. 1-6
Main Authors: Shirakawa, Yuta; Kozakaya, Tatsuo
Format: Conference Proceeding
Language: English
Published: MVA Organization, 01.05.2019
Summary: This paper proposes a pose-based unsupervised embedding learning method for action recognition. To classify human actions by the similarity of their motions, it is important to establish a feature space in which similar motions map to similar vector representations. However, learning a feature space with this property in a supervised manner requires huge numbers of training samples, tailored keypoint annotations, and action category labels. Although the cost of labeling keypoints keeps falling as 2D pose estimation methods improve, labeling video categories remains problematic because of the variety of categories and the ambiguity and variation of videos. To avoid such expensive category labeling, we follow the success of "Skip-Thought Vectors", an unsupervised approach to modeling sentence similarity, and apply its idea to contiguous pose sequences to learn feature representations for measuring motion similarity. Because human actions are handled as 2D poses rather than images, the model can be small and easy to handle, and the training data can be augmented by projecting 3D motion-capture data to 2D. Through evaluation on the JHMDB dataset, we explore various design choices, such as whether to handle actions as sequences of poses or as sequences of images. Our approach leverages pose sequences from 3D motion capture and improves performance to as much as 61.6% on JHMDB.
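To make the transplanted Skip-Thought objective concrete, here is a minimal sketch: an encoder compresses the current pose sequence into a fixed-length embedding, and two decoders regress the preceding and following pose sequences from it. All architectural details here (GRU cells, layer sizes, joint count, MSE reconstruction loss, the SkipPose name) are illustrative assumptions, not the authors' exact model.

```python
# Sketch of a Skip-Thought-style encoder-decoder over 2D pose sequences.
import torch
import torch.nn as nn

class SkipPose(nn.Module):
    def __init__(self, n_joints=15, hidden=256):
        super().__init__()
        in_dim = n_joints * 2                      # each frame: (x, y) per joint
        self.encoder = nn.GRU(in_dim, hidden, batch_first=True)
        self.dec_prev = nn.GRU(in_dim, hidden, batch_first=True)
        self.dec_next = nn.GRU(in_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, in_dim)

    def forward(self, prev_seq, cur_seq, next_seq):
        # Encode the current pose sequence into a fixed-length motion embedding.
        _, h = self.encoder(cur_seq)               # h: (1, B, hidden)
        # Condition both decoders on the embedding and regress the neighboring
        # sequences (teacher forcing: inputs shifted by one frame).
        prev_out, _ = self.dec_prev(prev_seq[:, :-1], h)
        next_out, _ = self.dec_next(next_seq[:, :-1], h)
        loss = nn.functional.mse_loss(self.out(prev_out), prev_seq[:, 1:]) \
             + nn.functional.mse_loss(self.out(next_out), next_seq[:, 1:])
        return h.squeeze(0), loss                  # embedding + training loss
```

After training, only the encoder would be kept; the summary implies the resulting embeddings are compared by similarity (for example, with nearest-neighbor matching) to classify actions on JHMDB.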
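The summary also mentions augmenting training data by projecting 3D motion capture to 2D. A minimal sketch of that idea, assuming a simple pinhole camera with a randomized yaw (the camera model, depth offset, and parameter ranges are illustrative only):

```python
# Sketch: project 3D mocap joints to 2D under a randomized virtual camera.
import numpy as np

def project_mocap(joints3d, focal=1.0, rng=None):
    """joints3d: (T, J, 3) 3D joint positions; returns (T, J, 2) 2D poses."""
    rng = rng or np.random.default_rng()
    yaw = rng.uniform(-np.pi, np.pi)          # random viewpoint around the subject
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])              # rotation about the vertical axis
    pts = joints3d @ R.T
    pts[..., 2] += 4.0                        # place the subject in front of the camera
    return focal * pts[..., :2] / pts[..., 2:3]   # perspective divide
```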
DOI: 10.23919/MVA.2019.8757937