Deep Convolutional Neural Networks for Efficient Pose Estimation in Gesture Videos

Bibliographic Details
Published in: Computer Vision -- ACCV 2014, Vol. 9003, pp. 538-552
Main Authors: Pfister, Tomas; Simonyan, Karen; Charles, James; Zisserman, Andrew
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2015
Series: Lecture Notes in Computer Science

Summary: Our objective is to efficiently and accurately estimate the upper body pose of humans in gesture videos. To this end, we build on the recent successful applications of deep convolutional neural networks (ConvNets). Our novelties are: (i) our method is the first to our knowledge to use ConvNets for estimating human pose in videos; (ii) a new network that exploits temporal information from multiple frames, leading to better performance; (iii) showing that pre-segmenting the foreground of the video improves performance; and (iv) demonstrating that even without foreground segmentations, the network learns to abstract away from the background and can estimate the pose even in the presence of a complex, varying background. We evaluate our method on the BBC TV Signing dataset and show that our pose predictions are significantly better, and an order of magnitude faster to compute, than the state of the art [3].
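The summary describes a ConvNet that takes several consecutive video frames as input and regresses upper-body joint positions directly. The sketch below illustrates that general idea only: the layer sizes, the 3-frame input, the 7-joint output, and the use of PyTorch are assumptions made for illustration, not details taken from the chapter.

```python
# Minimal illustrative sketch (assumed architecture, not the authors' model):
# a ConvNet that takes a short stack of RGB frames and directly regresses
# 2D upper-body joint coordinates. Layer sizes, the 3-frame input, and the
# 7-joint output are placeholder choices.
import torch
import torch.nn as nn


class MultiFramePoseNet(nn.Module):
    def __init__(self, num_frames: int = 3, num_joints: int = 7):
        super().__init__()
        in_channels = 3 * num_frames  # consecutive RGB frames stacked channel-wise
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(4),
        )
        # Fully connected layers regress (x, y) for every joint.
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 2 * num_joints),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames * 3, H, W) -> (batch, num_joints, 2)
        coords = self.regressor(self.features(frames))
        return coords.view(coords.size(0), -1, 2)


net = MultiFramePoseNet()
clip = torch.randn(1, 9, 120, 120)  # one 3-frame crop of 120x120 pixels
joints = net(clip)                  # shape (1, 7, 2): predicted joint coordinates
```

Training such a regressor would typically minimise an L2 loss between predicted and ground-truth joint coordinates; feeding foreground-segmented frames, as the summary suggests, would only change the input supplied to the first convolution.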
Bibliography: Electronic supplementary material: the online version of this chapter (doi:10.1007/978-3-319-16865-4_35) contains supplementary material, which is available to authorized users. Videos can also be accessed at http://www.springerimages.com/videos/978-3-319-16864-7.
ISBN: 3319168649; 9783319168647
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-319-16865-4_35