Near laser-scan quality 3-D face reconstruction from a low-quality depth stream


Bibliographic Details
Published in: Image and Vision Computing, Vol. 36, pp. 61–69
Main Authors: Hernandez, Matthias; Choi, Jongmoo; Medioni, Gérard
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.04.2015

Summary: We propose a method to produce near laser-scan quality 3-D face models of a freely moving user with a low-cost, low-resolution range sensor in real time. Our approach does not require any prior knowledge about the geometry of a face and can produce faithful geometric models of any star-shaped object. We use a cylindrical representation, which enables us to efficiently process the 3-D mesh by applying 2-D filters. We use the first frame as a reference and incrementally build the model by registering each subsequent cloud of 3-D points to the reference using the ICP (Iterative Closest Point) algorithm implemented on a GPU (Graphics Processing Unit). The registered point clouds are merged into a single image through a cylindrical representation. The noise from the sensor and from the pose estimation error is removed with a temporal integration and a spatial smoothing of the successively incremented model. To validate our approach, we quantitatively compare our model to laser scans, and show comparable accuracy. (This paper extends the method presented in [15].)

• We infer a very accurate 3-D face model for a freely moving user from a single depth camera.
• Using unwrapped cylindrical 2-D images enables us to use simple 2-D image processing algorithms to process the 3-D information.
• We use a combination of spatial smoothing and temporal integration for noise removal.
• A robust rejection method produces reliable results in the presence of facial expression changes and partial occlusions.
• Our system runs online and in real time.
ISSN: 0262-8856, 1872-8138
DOI: 10.1016/j.imavis.2014.12.004
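The summary describes three steps that lend themselves to a small illustration: unwrapping registered 3-D points onto a cylindrical 2-D grid, temporally integrating successive frames into that grid, and spatially smoothing the result with ordinary 2-D filters. The Python sketch below is a minimal CPU-only illustration of that bookkeeping, not the authors' implementation: the grid resolution, cylinder-axis convention, height range, and Gaussian smoothing kernel are assumptions chosen for clarity, and each incoming point cloud is assumed to have already been registered to the reference frame (done with GPU-accelerated ICP in the paper).

import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative resolution of the unwrapped cylindrical map (height x angle); not values from the paper.
H, W = 256, 512

def cylindrical_projection(points, y_min=-0.12, y_max=0.12):
    """Map (x, y, z) points (cylinder axis assumed along y) to (row, col, radius) on the unwrapped grid."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(z, x)                                  # angle around the axis, in [-pi, pi]
    radius = np.hypot(x, z)                                   # distance from the axis
    col = ((theta + np.pi) / (2 * np.pi) * (W - 1)).astype(int)
    row = np.clip((y - y_min) / (y_max - y_min) * (H - 1), 0, H - 1).astype(int)
    return row, col, radius

class CylindricalModel:
    def __init__(self):
        self.radius = np.zeros((H, W))    # running per-cell radius estimate
        self.weight = np.zeros((H, W))    # per-cell observation count

    def integrate(self, registered_points):
        """Temporal integration: fold one registered point cloud into the running per-cell average."""
        row, col, r = cylindrical_projection(registered_points)
        frame = np.zeros((H, W))
        count = np.zeros((H, W))
        np.add.at(frame, (row, col), r)   # sum the radii falling into each cell
        np.add.at(count, (row, col), 1)
        seen = count > 0
        frame[seen] /= count[seen]        # per-cell mean radius for this frame
        total = self.weight + count
        self.radius[seen] = (self.radius[seen] * self.weight[seen]
                             + frame[seen] * count[seen]) / total[seen]
        self.weight = total

    def smoothed(self, sigma=1.0):
        """Spatial smoothing: a plain 2-D Gaussian filter on the unwrapped radius map."""
        return gaussian_filter(self.radius, sigma=sigma)

# Tiny self-contained usage example on synthetic noisy data (a cylinder of radius 9 cm):
rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi, np.pi, 5000)
y = rng.uniform(-0.12, 0.12, 5000)
model = CylindricalModel()
for _ in range(10):                       # simulate ten noisy observations of the same surface
    r = 0.09 + rng.normal(0, 0.002, 5000)
    pts = np.stack([r * np.cos(theta), y, r * np.sin(theta)], axis=1)
    model.integrate(pts)
denoised = model.smoothed(sigma=1.0)      # H x W map of temporally and spatially denoised radii

Re-meshing the smoothed radius map back into 3-D (each cell corresponds to an angle/height pair) would yield the surface; the robust outlier rejection and the real-time GPU registration described in the summary are outside the scope of this sketch.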