Human Action Recognition Utilizing Variations in Skeleton Dimensions

Bibliographic Details
Published in: Arabian Journal for Science and Engineering (2018), Vol. 43, No. 2, pp. 597-610
Main Authors: Moussa, Mona M.; Hemayed, Elsayed E.; El Nemr, Heba A.; Fayek, Magda B.
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.02.2018

Summary: This paper presents a human action recognition system that distinguishes between different actions using a new set of features based on global variation in the visual appearance of the subject's body. The proposed technique utilizes the changes in human body dimensions while an action is performed to extract this feature set. These dimension variations are calculated from the skeleton of the human body performing the action to be recognized. Because the extracted features are high level, the technique can process both 2D and 3D camera videos: the skeleton can be extracted from a video captured using traditional 2D cameras or depth-sensing cameras. Finally, a multi-class linear support vector machine is employed in the classification stage. Experiments are conducted on the Weizmann, Berkeley MHAD, and MSR-Action3D datasets. The results show that the technique achieves an accuracy of 98.9% on Weizmann, 99.63% on Berkeley MHAD, and 94.3% on MSR-Action3D. Moreover, a cross-dataset experiment is conducted to verify the generality of the proposed technique: the system is trained on the Berkeley MHAD dataset and tested on MSR-Action3D, achieving 88.76% accuracy. In addition, a novel approach to recognizing handwriting actions based on hand tracking is presented; it achieved an accuracy of 100% and can be considered an application of the proposed technique.
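
The abstract only sketches the pipeline, so the following is a minimal illustration rather than the authors' implementation: a hypothetical dimension_variation_features function that tracks frame-to-frame changes in the skeleton's bounding-box extents and a multi-class linear SVM (scikit-learn's LinearSVC) for classification. The feature definition, bin count, and data layout below are all assumptions made for illustration, not the paper's exact feature set.

import numpy as np
from sklearn.svm import LinearSVC

def dimension_variation_features(skeleton, n_bins=20):
    # skeleton: array of shape (n_frames, n_joints, d), with d = 2 or 3.
    # Hypothetical stand-in for the paper's features: measure how the
    # body's bounding-box extents change from frame to frame, then
    # resample each variation curve to a fixed length so every action
    # yields a vector of the same size.
    extents = skeleton.max(axis=1) - skeleton.min(axis=1)  # (n_frames, d)
    variation = np.diff(extents, axis=0)                   # frame-to-frame change
    idx = np.linspace(0, len(variation) - 1, n_bins)
    curves = [np.interp(idx, np.arange(len(variation)), variation[:, k])
              for k in range(variation.shape[1])]
    return np.concatenate(curves)                          # fixed-size feature vector

# Hypothetical usage: `sequences` is a list of per-action skeleton arrays,
# `labels` the corresponding action classes (e.g. Weizmann action names).
# X = np.stack([dimension_variation_features(s) for s in sequences])
# clf = LinearSVC().fit(X, labels)  # multi-class linear SVM (one-vs-rest)

Because the features are computed from joint coordinates alone, the same sketch applies whether the skeleton came from a 2D pose estimator or a depth sensor, which mirrors the sensor-independence claim in the abstract.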
ISSN: 2193-567X; 1319-8025; 2191-4281
DOI: 10.1007/s13369-017-2694-9