Learning 3D action models from a few 2D videos for view invariant action recognition

Bibliographic Details
Published in: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2006 - 2013
Main Authors: Natarajan, P; Singh, V K; Nevatia, R
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2010
More Information
Summary: Most existing approaches for learning action models work by extracting suitable low-level features and then training appropriate classifiers. Such approaches require large amounts of training data and do not generalize well to variations in viewpoint, scale and across datasets. Some work has been done recently to learn multi-view action models from Mocap data, but obtaining such data is time consuming and requires costly infrastructure. We present a method that addresses both these issues by learning action models from just a few video training samples. We model each action as a sequence of primitive actions, represented as functions which transform the actor's state. We formulate model learning as a curve-fitting problem, and present a novel algorithm for learning human actions by lifting 2D annotations of a few keyposes to 3D and interpolating between them. Actions are inferred by sampling the models and accumulating the feature weights learned discriminatively using a latent state Perceptron algorithm. We show results comparable to the state of the art on the standard Weizmann dataset, with a much smaller train:test ratio, and also on datasets for visual gesture recognition and cluttered grocery store environments.
ISBN: 1424469848, 9781424469840
ISSN: 1063-6919
DOI: 10.1109/CVPR.2010.5539876
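
The summary above describes learning an action model by lifting 2D annotations of a few keyposes to 3D and interpolating between them. The sketch below illustrates only that interpolation step, not the authors' actual curve-fitting algorithm: the function name, the (K, J, 3) keypose layout, the joint count, and the use of simple per-coordinate linear interpolation are all assumptions made for illustration.

```python
import numpy as np

def interpolate_keyposes(keyposes, times, num_samples=30):
    """Densify a sparse set of 3D keyposes into a pose trajectory.

    `keyposes` is a (K, J, 3) array of K keyposes with J joints in 3D,
    and `times` gives each keypose's normalized time in [0, 1].
    Linear interpolation is a stand-in for the paper's curve fitting.
    """
    keyposes = np.asarray(keyposes, dtype=float)
    times = np.asarray(times, dtype=float)
    sample_ts = np.linspace(times[0], times[-1], num_samples)

    # Interpolate every joint coordinate independently over time.
    K, J, _ = keyposes.shape
    flat = keyposes.reshape(K, J * 3)
    dense = np.stack(
        [np.interp(sample_ts, times, flat[:, d]) for d in range(J * 3)],
        axis=1,
    )
    return dense.reshape(num_samples, J, 3)

# Hypothetical example: two keyposes of a 15-joint skeleton for a "bend" action.
start_pose = np.zeros((15, 3))
end_pose = np.zeros((15, 3))
end_pose[:, 1] = -0.5  # all joints lowered along the y-axis in the end pose
trajectory = interpolate_keyposes([start_pose, end_pose], times=[0.0, 1.0])
print(trajectory.shape)  # (30, 15, 3)
```

In the paper's setting the interpolated poses would then be used to generate view-dependent features for recognition; the sketch stops at producing the dense 3D pose trajectory.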