Action Recognition and Localization by Hierarchical Space-Time Segments

Bibliographic Details
Published in: 2013 IEEE International Conference on Computer Vision, pp. 2744 - 2751
Main Authors: Shugao Ma, Jianming Zhang, Nazli Ikizler-Cinbis, Stan Sclaroff
Format: Conference Proceeding; Journal Article
Language: English
Published: IEEE, 01.12.2013
ISSN: 1550-5499
DOI: 10.1109/ICCV.2013.341


More Information
Summary: We propose Hierarchical Space-Time Segments as a new representation for action recognition and localization. This representation has a two-level hierarchy. The first level comprises the root space-time segments that may contain a human body. The second level comprises multi-grained space-time segments that contain parts of the root. We present an unsupervised method to generate this representation from video, which extracts both static and non-static relevant space-time segments and preserves their hierarchical and temporal relationships. Using a simple linear SVM on the resulting bag of hierarchical space-time segments representation, we attain action recognition performance better than or comparable to the state of the art on two challenging benchmark datasets, and at the same time produce good action localization results.
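For illustration only, not code from the paper: a minimal Python sketch of how a video could be encoded as a bag-of-segments histogram over a learned codebook and classified with a linear SVM, in the spirit of the pipeline the summary describes. The SpaceTimeSegment structure, descriptor dimensionality, codebook size, and the random stand-in data below are all hypothetical assumptions.

```python
# Hedged sketch, not the authors' implementation: encode each video as a histogram
# over a codebook of space-time segment descriptors, then train a linear SVM.
from dataclasses import dataclass, field
from typing import List
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

@dataclass
class SpaceTimeSegment:
    descriptor: np.ndarray                      # hypothetical segment descriptor
    children: List["SpaceTimeSegment"] = field(default_factory=list)  # part segments

def flatten(root: SpaceTimeSegment) -> List[np.ndarray]:
    """Collect descriptors of a root segment and all of its part segments."""
    out = [root.descriptor]
    for child in root.children:
        out.extend(flatten(child))
    return out

def bag_of_segments(videos: List[List[SpaceTimeSegment]], codebook: KMeans) -> np.ndarray:
    """Encode each video as an L1-normalized histogram over the segment codebook."""
    k = codebook.n_clusters
    hists = np.zeros((len(videos), k))
    for i, roots in enumerate(videos):
        descs = np.vstack([d for r in roots for d in flatten(r)])
        words = codebook.predict(descs)
        hists[i] = np.bincount(words, minlength=k) / len(words)
    return hists

# Usage sketch with random stand-in descriptors (purely illustrative):
rng = np.random.default_rng(0)
def fake_video() -> List[SpaceTimeSegment]:
    root = SpaceTimeSegment(rng.normal(size=64),
                            [SpaceTimeSegment(rng.normal(size=64)) for _ in range(3)])
    return [root]

train_videos = [fake_video() for _ in range(20)]
train_labels = rng.integers(0, 2, size=20)
all_descs = np.vstack([d for v in train_videos for r in v for d in flatten(r)])
codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(all_descs)
X_train = bag_of_segments(train_videos, codebook)
clf = LinearSVC().fit(X_train, train_labels)    # simple linear SVM, as in the summary
```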