Subject-Specific Human Modeling for Human Pose Estimation
| Field | Value |
|---|---|
| Published in | IEEE Transactions on Human-Machine Systems, Vol. 53, No. 1, pp. 54-64 |
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.02.2023 |
Summary: 3-D human pose estimation, or human tracking, has long been a focus of research in the human-computer interaction community. As the calibration step of human pose estimation, subject-specific modeling is crucial to the subsequent pose estimation process: it not only provides a priori knowledge but also clearly defines the tracking target. This article presents a fully automatic subject-modeling framework that reconstructs human pose, shape, and body texture in a challenging optimization scenario. By integrating differentiable rendering into the subject-specific modeling pipeline, the proposed method casts texture reconstruction as an analysis-by-synthesis minimization and solves it efficiently with a gradient-based method. Furthermore, a novel covariance matrix adaptation annealing algorithm is proposed to tackle the high-dimensional, multimodal optimization problem in an adaptive manner. Domain knowledge of hierarchical human anatomy is injected into the annealing process through a soft covariance matrix mask. Together, these components make the algorithm robust to local minima. Experiments on the Human3.6M dataset and the People-Snapshot dataset demonstrate results competitive with the state of the art, both qualitatively and quantitatively.
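To make the summary's notion of covariance matrix adaptation annealing with a soft covariance mask more concrete, the following is a minimal, illustrative sketch. It is not the authors' published algorithm: the hierarchy levels, mask floor, annealing schedule, and rank-mu-style update are assumptions chosen only to show how a coarse-to-fine anatomical mask could shape a CMA-like sampling loop.

```python
# Illustrative sketch only: a simplified covariance-matrix-adaptation loop with
# an annealing temperature and a "soft covariance mask" that emphasizes one
# level of a hypothetical anatomical hierarchy at a time. This is NOT the
# paper's algorithm; names and constants are assumptions for the example.
import numpy as np

def soft_mask(dim, active_idx, floor=0.05):
    """Near-1 weights on the active pose dimensions, a small floor elsewhere."""
    m = np.full(dim, floor)
    m[active_idx] = 1.0
    return m

def cma_annealing(loss, x0, sigma0, hierarchy, iters=200, lam=32, seed=0):
    """Minimize `loss` over a pose vector, coarse-to-fine along `hierarchy`
    (a list of index arrays, e.g. torso dims first, then limbs, then hands)."""
    rng = np.random.default_rng(seed)
    dim = x0.size
    mean, sigma = x0.astype(float).copy(), float(sigma0)
    C = np.eye(dim)                                   # search covariance
    mu = lam // 2                                     # number of elite samples
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                      # recombination weights
    for t in range(iters):
        T = 1.0 - t / iters                           # annealing temperature
        level = hierarchy[min(int((1.0 - T) * len(hierarchy)), len(hierarchy) - 1)]
        m = soft_mask(dim, level)                     # damp inactive dimensions
        A = np.linalg.cholesky(C * np.outer(m, m) + 1e-8 * np.eye(dim))
        X = mean + sigma * T * (rng.standard_normal((lam, dim)) @ A.T)
        elites = X[np.argsort([loss(x) for x in X])[:mu]]
        old_mean = mean
        mean = w @ elites                             # weighted recombination
        Y = (elites - old_mean) / (sigma * T + 1e-12)
        C = 0.8 * C + 0.2 * (Y.T @ (w[:, None] * Y))  # rank-mu style update
    return mean

# Toy usage: fit a 10-D "pose" to a quadratic target, coarse dims (0-3) first.
if __name__ == "__main__":
    target = np.linspace(-1.0, 1.0, 10)
    hierarchy = [np.arange(0, 4), np.arange(4, 7), np.arange(7, 10)]
    best = cma_annealing(lambda x: np.sum((x - target) ** 2),
                         np.zeros(10), sigma0=1.0, hierarchy=hierarchy)
    print(np.round(best, 2))
```

In this sketch the mask down-weights covariance entries outside the currently active anatomical level, so early iterations explore mainly the coarse (torso-like) dimensions while later, cooler iterations refine the finer ones.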
ISSN: 2168-2291, 2168-2305
DOI: 10.1109/THMS.2022.3195952