Camera Distance-Aware Top-Down Approach for 3D Multi-Person Pose Estimation From a Single RGB Image

Bibliographic Details
Published in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 10132 - 10141
Main Authors Moon, Gyeongsik; Chang, Ju Yong; Lee, Kyoung Mu
Format Conference Proceeding
Language English
Published IEEE 01.10.2019
Summary: Although significant improvement has been achieved recently in 3D human pose estimation, most previous methods address only the single-person case. In this work, we propose the first fully learning-based, camera distance-aware top-down approach for 3D multi-person pose estimation from a single RGB image. The pipeline of the proposed system consists of human detection, absolute 3D human root localization, and root-relative 3D single-person pose estimation modules. Our system achieves results comparable to state-of-the-art 3D single-person pose estimation models without any ground truth information and significantly outperforms previous 3D multi-person pose estimation methods on publicly available datasets. The code is available in 1,2.
ISSN:2380-7504
DOI:10.1109/ICCV.2019.01023
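As a rough illustration of the top-down pipeline described in the summary (human detection, absolute 3D root localization, root-relative 3D pose estimation), the following is a minimal sketch in Python/NumPy. The callables detect_humans, estimate_root_depth, and estimate_relative_pose, the camera intrinsics, and all numeric values are hypothetical placeholders standing in for the learned networks, not the authors' released implementation; only the pinhole back-projection and the root-plus-relative composition reflect the structure stated in the abstract.

```python
import numpy as np

def backproject_pixel_to_camera(u, v, z, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) at absolute depth z (mm) -> camera-space point (mm)."""
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    return np.array([x, y, z])

def estimate_multi_person_3d(image, detect_humans, estimate_root_depth,
                             estimate_relative_pose, intrinsics):
    """Top-down pipeline: for each detected person, localize the absolute 3D root,
    then add the root-relative 3D joint offsets to obtain absolute 3D coordinates."""
    fx, fy, cx, cy = intrinsics
    results = []
    for bbox in detect_humans(image):                       # 1) human detection
        crop = image  # a real system would crop/resize the detected person region here
        root_uv, root_z = estimate_root_depth(crop, bbox)   # 2) absolute root localization
        root_cam = backproject_pixel_to_camera(root_uv[0], root_uv[1], root_z,
                                               fx, fy, cx, cy)
        rel_pose = estimate_relative_pose(crop, bbox)       # 3) root-relative 3D pose (J x 3, mm)
        results.append(root_cam + rel_pose)                 # absolute pose = root + relative offsets
    return results

# Toy usage with stub modules standing in for the learned networks.
if __name__ == "__main__":
    H, W = 480, 640
    image = np.zeros((H, W, 3), dtype=np.uint8)
    intrinsics = (1500.0, 1500.0, W / 2.0, H / 2.0)          # fx, fy, cx, cy (assumed values)

    detect_humans = lambda img: [(100, 80, 200, 400)]        # one fake box (x, y, w, h)
    estimate_root_depth = lambda img, box: (np.array([box[0] + box[2] / 2.0,
                                                      box[1] + box[3] / 2.0]), 3500.0)
    estimate_relative_pose = lambda img, box: np.zeros((17, 3))  # 17 joints placed at the root

    poses = estimate_multi_person_3d(image, detect_humans, estimate_root_depth,
                                     estimate_relative_pose, intrinsics)
    print(poses[0][0])   # absolute camera-space coordinates of the root joint (mm)
```

The key design choice illustrated here is the decoupling of absolute root depth from root-relative joint layout: the camera-distance-aware root localization fixes where the person is in camera space, and the single-person module only has to predict the pose around that root.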