Markerless Shape and Motion Capture From Multiview Video Sequences

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 21, No. 3, pp. 320-334
Main Authors: Li, Kun; Dai, Qionghai; Xu, Wenli
Format: Journal Article
Language: English
Published: New York, NY: IEEE, 01.03.2011
Summary: We propose a new markerless shape and motion capture approach from multiview video sequences. The shape recovery method consists of two steps: separating and merging. In the separating step, a depth map, represented as a point cloud, is generated for each view by solving a proposed variational model, which is regularized by four constraints to ensure the accuracy and completeness of the reconstruction. In the merging step, the point clouds of all views are merged together and reconstructed into a 3-D mesh using a marching cubes method with silhouette constraints. Experiments show that geometric details are faithfully preserved in each estimated depth map. The 3-D meshes reconstructed from the estimated depth maps are watertight and present rich geometric details, even for non-convex objects. Taking the reconstructed 3-D mesh as the underlying scene representation, a volumetric deformation method with a new positional-constraint computation scheme is proposed to automatically capture the motion of the 3-D object. Our method can capture non-rigid motions, even of loosely dressed humans, without the aid of markers.
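To make the merging step concrete, below is a minimal Python sketch of the idea, not the authors' implementation. It assumes the per-view point clouds are already registered in a common world frame and scaled to the unit cube, and that a boolean visual_hull grid has been precomputed by back-projecting the silhouettes; scikit-image's marching_cubes stands in for the silhouette-constrained marching cubes described in the paper. The names merge_point_clouds_to_mesh, point_clouds, and visual_hull are illustrative.

    import numpy as np
    from skimage import measure  # provides a marching cubes implementation

    def merge_point_clouds_to_mesh(point_clouds, visual_hull, level=0.5):
        """Voxelize the merged per-view point clouds and extract a mesh.

        point_clouds: list of (N_i, 3) arrays in a common world frame,
                      with all coordinates normalized to [0, 1).
        visual_hull:  boolean voxel grid marking the inside of the
                      silhouette cones (the silhouette constraint).
        """
        grid_shape = np.array(visual_hull.shape)

        # Merge step: stack all per-view point clouds into one set.
        points = np.vstack(point_clouds)

        # Accumulate point density on a regular voxel grid.
        idx = np.clip((points * grid_shape).astype(int), 0, grid_shape - 1)
        occupancy = np.zeros(visual_hull.shape, dtype=float)
        np.add.at(occupancy, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
        occupancy = np.minimum(occupancy, 1.0)

        # Silhouette constraint: discard voxels outside the visual hull.
        occupancy[~visual_hull] = 0.0

        # Marching cubes extracts a closed triangle mesh, provided the
        # object lies strictly inside the grid.
        verts, faces, normals, _ = measure.marching_cubes(occupancy, level)
        return verts, faces

Binarizing the occupancy before meshing is a simplification for the sketch; the paper's variational depth maps would supply a richer implicit function, which is what preserves the fine geometric detail reported in the experiments.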
ISSN: 1051-8215 (print); 1558-2205 (electronic)
DOI: 10.1109/TCSVT.2011.2106251