Depth-Aware Multi-Person 3D Pose Estimation With Multi-Scale Waterfall Representations

Bibliographic Details
Published in: IEEE Transactions on Multimedia, Vol. 25, pp. 1439-1451
Main Authors: Shen, Tianyu; Li, Deqi; Wang, Fei-Yue; Huang, Hua
Format: Journal Article
Language: English
Published: Piscataway, NJ: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023
Summary: Estimating the absolute 3D poses of multiple people from a monocular image is challenging due to occlusions and scale variation among different persons. Among existing methods, top-down paradigms depend heavily on human detection, which is susceptible to inter-person occlusions, while bottom-up paradigms suffer from difficulties in keypoint feature extraction caused by scale variation and from unreliable joint grouping caused by occlusions. To address these challenges, we introduce a novel multi-person 3D pose estimation framework, aided by multi-scale feature representations and human depth perception. First, a waterfall-based architecture is incorporated to produce multi-scale feature representations, yielding more accurate estimation of occluded joints and better detection of human shapes. The global and local representations are then fused to mitigate the effects of inter-person occlusion and scale variation in depth perception and keypoint feature extraction. Finally, guided by the fused multi-scale representations, a depth-aware model is exploited for better 2D joint grouping and 3D pose recovery. Quantitative and qualitative evaluations on the MuCo-3DHP and MuPoTS-3D benchmark datasets demonstrate the effectiveness of our proposed method. Furthermore, we produce an occluded MuPoTS-3D dataset, and experiments on it validate the superiority of our method in handling occlusions.
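A waterfall architecture of the kind the summary references cascades atrous (dilated) convolutions so that each branch refines the previous branch's output while a pooled branch supplies image-level context. The following minimal PyTorch sketch shows such a waterfall-style multi-scale module in the spirit of WASP (waterfall atrous spatial pooling); the class name, dilation rates, and channel sizes here are illustrative assumptions, not the authors' published configuration.

    # Illustrative sketch of a waterfall-style multi-scale module (WASP-like).
    # Hyperparameters are assumptions, not the paper's exact settings.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WaterfallModule(nn.Module):
        def __init__(self, in_ch=256, branch_ch=64, rates=(1, 6, 12, 18)):
            super().__init__()
            self.branches = nn.ModuleList()
            ch = in_ch
            for r in rates:
                # kernel 3 with padding == dilation keeps the spatial size
                self.branches.append(nn.Sequential(
                    nn.Conv2d(ch, branch_ch, 3, padding=r, dilation=r,
                              bias=False),
                    nn.BatchNorm2d(branch_ch),
                    nn.ReLU(inplace=True)))
                ch = branch_ch  # waterfall: next branch consumes this output
            # image-level pooling provides the global-context branch
            self.global_branch = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(in_ch, branch_ch, 1),
                nn.ReLU(inplace=True))
            self.project = nn.Conv2d(branch_ch * (len(rates) + 1), in_ch, 1)

        def forward(self, x):
            outs, feat = [], x
            for branch in self.branches:
                feat = branch(feat)      # cascaded (waterfall) flow
                outs.append(feat)
            g = self.global_branch(x)    # global context from the input
            g = F.interpolate(g, size=x.shape[-2:], mode="bilinear",
                              align_corners=False)
            outs.append(g)
            # concatenate local multi-scale and global features, then fuse
            return self.project(torch.cat(outs, dim=1))

Applied to a 256-channel backbone feature map, e.g. WaterfallModule()(torch.randn(1, 256, 64, 64)), the module returns a same-resolution tensor that merges local multi-scale and global cues, mirroring the global-local fusion the summary describes.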
ISSN: 1520-9210
EISSN: 1941-0077
DOI: 10.1109/TMM.2022.3233251