Learning to Fuse 2D and 3D Image Cues for Monocular Body Pose Estimation


Bibliographic Details
Published in: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 3961–3970
Main Authors: Tekin, Bugra; Marquez-Neila, Pablo; Salzmann, Mathieu; Fua, Pascal
Format: Conference Proceeding
Language: English
Published: IEEE, 01.10.2017
Summary: Most recent approaches to monocular 3D human pose estimation rely on Deep Learning. They typically involve regressing from an image to either 3D joint coordinates directly or 2D joint locations from which 3D coordinates are inferred. Both approaches have their strengths and weaknesses and we therefore propose a novel architecture designed to deliver the best of both worlds by performing both simultaneously and fusing the information along the way. At the heart of our framework is a trainable fusion scheme that learns how to fuse the information optimally instead of being hand-designed. This yields significant improvements upon the state-of-the-art on standard 3D human pose estimation benchmarks.
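The core idea of the summary, fusing a 2D-cue stream and a 3D-cue stream with learned rather than hand-designed weights, can be illustrated with a minimal sketch. This is not the paper's architecture: the function names, shapes, and the simple convex-combination rule below are illustrative assumptions only, standing in for learned fusion between two network branches.

```python
import numpy as np

rng = np.random.default_rng(0)


def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(x - x.max())
    return e / e.sum()


def fuse(feat_2d, feat_3d, alpha):
    """Blend two feature streams with trainable mixing weights.

    `alpha` is a 2-vector of trainable logits (hypothetical parameter
    name); softmax turns it into convex combination weights, so during
    training the model can learn to favor either cue, or any mixture,
    instead of relying on a fixed, hand-designed fusion rule.
    """
    w = softmax(alpha)
    return w[0] * feat_2d + w[1] * feat_3d


# Toy feature maps standing in for the 2D-heatmap and 3D-regression branches.
feat_2d = rng.standard_normal((4, 4))
feat_3d = rng.standard_normal((4, 4))

alpha = np.zeros(2)  # equal logits: a 50/50 blend before any training
fused = fuse(feat_2d, feat_3d, alpha)
```

With zero logits the fusion reduces to a plain average of the two streams; gradient updates to `alpha` would then shift the balance toward whichever cue lowers the training loss.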
ISSN: 2380-7504
DOI: 10.1109/ICCV.2017.425