Recognizing human actions using multiple features

Bibliographic Details
Published in: 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1 - 8
Main Authors: Liu, Jingen; Ali, S.; Shah, M.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2008

Summary: In this paper, we propose a framework that fuses multiple features for improved action recognition in videos. The fusion of multiple features is important for recognizing actions, as a single-feature-based representation is often not enough to capture the imaging variations (viewpoint, illumination, etc.) and attributes of individuals (size, age, gender, etc.). Hence, we use two types of features: i) a quantized vocabulary of local spatio-temporal (ST) volumes (or cuboids), and ii) a quantized vocabulary of spin-images, which aims to capture the shape deformation of the actor by treating actions as 3D objects in (x, y, t). To optimally combine these features, we treat different features as nodes in a graph, where weighted edges between the nodes represent the strength of the relationship between entities. The graph is then embedded into a k-dimensional space subject to the criterion that similar nodes have Euclidean coordinates close to each other. This is achieved by converting the constraint into a minimization problem whose solution is given by the eigenvectors of the graph Laplacian matrix. This procedure is known as Fiedler embedding. The performance of the proposed framework is tested on publicly available data sets. The results demonstrate that fusing multiple features helps achieve improved performance and allows retrieval of meaningful features and videos from the embedding space.
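The Fiedler embedding step described in the summary reduces to an eigendecomposition of the graph Laplacian: nodes connected by strong edges are assigned nearby coordinates in the k-dimensional space. Below is a minimal Python sketch of that step, assuming a precomputed symmetric weight matrix W over the graph nodes (feature words, videos, etc.); the function name and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def fiedler_embedding(W, k):
    """Embed graph nodes into k dimensions via the graph Laplacian.

    W : (n, n) symmetric non-negative weight matrix; W[i, j] encodes
        the strength of the relationship between nodes i and j.
    k : target embedding dimension.
    Returns an (n, k) array of Euclidean coordinates in which
    strongly connected nodes land close to each other.
    """
    D = np.diag(W.sum(axis=1))            # degree matrix
    L = D - W                             # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    # Skip the trivial constant eigenvector (eigenvalue 0 for a
    # connected graph); the next k eigenvectors give the coordinates.
    return eigvecs[:, 1:k + 1]

# Toy usage: 5 nodes forming two loosely connected clusters.
W = np.array([[0.0, 1.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.1, 0.0],
              [0.0, 0.0, 0.1, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0, 0.0]])
coords = fiedler_embedding(W, k=2)
print(coords)  # rows = node coordinates; the clusters separate along dim 1
```

In this sketch, queries such as "which feature words are closest to this video" become nearest-neighbor lookups on the returned coordinates, which is how the embedding space supports the retrieval described in the summary.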
ISBN: 9781424422425, 1424422426
ISSN: 1063-6919
DOI: 10.1109/CVPR.2008.4587527