Real-time human pose estimation and gesture recognition from depth images using superpixels and SVM classifier

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 15, No. 6, pp. 12410-12427
Main Authors: Kim, Hanguen; Lee, Sangwon; Lee, Dongsung; Choi, Soonmin; Ju, Jinsun; Myung, Hyun
Format: Journal Article
Language: English
Published: Switzerland: MDPI AG, 26.05.2015

Summary: In this paper, we present human pose estimation and gesture recognition algorithms that use only depth information. The proposed methods are designed to run on a CPU (central processing unit) alone, so that they can operate on a low-cost platform such as an embedded board. The human pose estimation method is based on an SVM (support vector machine) and superpixels, without prior knowledge of a human body model. In the gesture recognition method, gestures are recognized from the pose information of the human body. To recognize gestures regardless of motion speed, the proposed method uses keyframe extraction. Gesture recognition is performed by comparing input keyframes with the keyframes of registered gestures, and the gesture yielding the smallest comparison error is chosen as the recognized gesture. To avoid recognizing a gesture when a person performs one that is not registered, we derive a maximum allowable comparison error for each registered gesture by comparing it with the other registered gestures. We evaluated our method on a dataset that we generated. The experimental results show that our method performs fairly well and is applicable in real environments.
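
The keyframe-matching and rejection scheme described in the abstract can be illustrated with a short sketch. The Python code below is not the authors' implementation: the keyframe representation (one flat vector of joint positions per keyframe), the Euclidean comparison error, the assumption that all sequences are resampled to the same number of keyframes, and the helper names (comparison_error, max_allowable_errors, recognize_gesture) are assumptions made purely for illustration.

```python
import numpy as np

def comparison_error(input_keyframes, gesture_keyframes):
    """Mean distance between corresponding keyframes.

    Assumes both sequences were resampled to the same number of keyframes
    and that each keyframe is a flat vector of joint positions.
    """
    a = np.asarray(input_keyframes, dtype=float)
    b = np.asarray(gesture_keyframes, dtype=float)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

def max_allowable_errors(registered):
    """Per-gesture rejection threshold: here taken as the smallest error
    between a registered gesture and any other registered gesture (one
    possible reading of 'comparing each registered gesture with the
    other gestures')."""
    thresholds = {}
    for name, keyframes in registered.items():
        others = [comparison_error(keyframes, other)
                  for other_name, other in registered.items()
                  if other_name != name]
        thresholds[name] = min(others) if others else np.inf
    return thresholds

def recognize_gesture(input_keyframes, registered, thresholds):
    """Return the registered gesture with the smallest comparison error,
    or None if that error exceeds the gesture's maximum allowable error
    (i.e., the input is treated as an unregistered gesture)."""
    best_name, best_err = None, np.inf
    for name, keyframes in registered.items():
        err = comparison_error(input_keyframes, keyframes)
        if err < best_err:
            best_name, best_err = name, err
    if best_name is not None and best_err <= thresholds[best_name]:
        return best_name
    return None

# Toy usage with three keyframes of 2-D "joint" vectors (purely illustrative).
registered = {
    "wave":  [[0, 0], [1, 1], [2, 0]],
    "raise": [[0, 0], [0, 2], [0, 4]],
}
thresholds = max_allowable_errors(registered)
print(recognize_gesture([[0, 0], [1, 1], [2, 0.1]], registered, thresholds))  # -> "wave"
```

The rejection threshold is what prevents an unregistered motion from being forced onto the nearest registered gesture: if the best match is still farther away than any registered gesture is from its closest neighbor, the input is reported as unrecognized.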
ISSN: 1424-8220
DOI: 10.3390/s150612410