Single-View RGBD-Based Reconstruction of Dynamic Human Geometry


Bibliographic Details
Published in: 2013 IEEE International Conference on Computer Vision Workshops, pp. 307 - 314
Main Authors: Malleson, Charles; Klaudiny, Martin; Hilton, Adrian; Guillemaut, Jean-Yves
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2013
DOI: 10.1109/ICCVW.2013.48

Summary: We present a method for reconstructing the geometry and appearance of indoor scenes containing dynamic human subjects using a single (optionally moving) RGBD sensor. We introduce a framework for building a representation of the articulated scene geometry as a set of piecewise rigid parts which are tracked and accumulated over time using moving voxel grids containing a signed distance representation. Data association of noisy depth measurements with body parts is achieved by online training of a prior shape model for the specific subject. A novel frame-to-frame model registration is introduced which combines iterative closest-point with additional correspondences from optical flow and prior pose constraints from noisy skeletal tracking data. We quantitatively evaluate the reconstruction and tracking performance of the approach using a synthetic animated scene. We demonstrate that the approach is capable of reconstructing mid-resolution surface models of people from low-resolution noisy data acquired from a consumer RGBD camera.
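The moving voxel grids described in the summary accumulate depth measurements as a truncated signed distance function (TSDF). As a rough illustration of that accumulation step only (not the authors' implementation — the pinhole intrinsics, grid layout, and per-frame weight of 1 are assumptions of this sketch), a per-frame fusion update might look like:

```python
import numpy as np

def tsdf_update(tsdf, weight, depth, fx, fy, cx, cy, origin, voxel_size, trunc):
    """Fuse one depth frame into a truncated signed distance volume.

    tsdf, weight : (X, Y, Z) running TSDF values and fusion weights.
    depth        : (H, W) depth map in metres; 0 marks missing data.
    The camera frame is assumed to coincide with the grid's world frame
    (illustrative simplification; a real system applies the tracked part pose).
    """
    X, Y, Z = tsdf.shape
    H, W = depth.shape
    # World coordinates of every voxel centre.
    ix, iy, iz = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing="ij")
    pts = origin + voxel_size * np.stack([ix, iy, iz], axis=-1)
    x, y, z = pts[..., 0], pts[..., 1], pts[..., 2]
    # Project voxel centres into the depth image (pinhole model, assumed intrinsics).
    zs = np.where(z > 0, z, 1.0)          # avoid divide-by-zero behind the camera
    u = np.round(fx * x / zs + cx).astype(int)
    v = np.round(fy * y / zs + cy).astype(int)
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.where(valid, depth[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1)], 0.0)
    valid &= d > 0
    # Signed distance along the viewing ray, truncated to [-trunc, trunc].
    sdf = np.clip(d - z, -trunc, trunc)
    update = valid & (sdf > -trunc)       # skip voxels far behind the surface
    # Running weighted average, with weight 1 per frame.
    w_new = weight + update
    tsdf_new = np.where(update, (tsdf * weight + sdf) / np.maximum(w_new, 1.0), tsdf)
    return tsdf_new, w_new
```

Fusing a flat wall at 1 m leaves near-zero TSDF values at voxels on the surface, positive values in front of it, and truncated negatives just behind; the zero crossing of the accumulated field is the reconstructed surface.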