PSHuman: Photorealistic Single-view Human Reconstruction using Cross-Scale Diffusion

Bibliographic Details
Published in: arXiv.org
Main Authors: Li, Peng; Zheng, Wangguandong; Liu, Yuan; Yu, Tao; Li, Yangguang; Qi, Xingqun; Li, Mengfei; Chi, Xiaowei; Xia, Siyu; Xue, Wei; Luo, Wenhan; Liu, Qifeng; Guo, Yike
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 16.09.2024

Summary: Detailed and photorealistic 3D human modeling is essential for various applications and has seen tremendous progress. However, full-body reconstruction from a monocular RGB image remains challenging due to the ill-posed nature of the problem and the sophisticated clothing topology with self-occlusions. In this paper, we propose PSHuman, a novel framework that explicitly reconstructs human meshes using priors from a multiview diffusion model. We find that directly applying multiview diffusion to single-view human images leads to severe geometric distortions, especially on generated faces. To address this, we propose a cross-scale diffusion that models the joint probability distribution of global full-body shape and local facial characteristics, enabling detailed and identity-preserving novel-view generation without geometric distortion. Moreover, to enhance cross-view body-shape consistency under varied human poses, we condition the generative model on parametric models such as SMPL-X, which provide body priors and prevent unnatural views inconsistent with human anatomy. Leveraging the generated multi-view normal and color images, we present SMPL-X-initialized explicit human carving to recover realistic textured human meshes efficiently. Extensive experimental results and quantitative evaluations on the CAPE and THuman2.1 datasets demonstrate PSHuman's superiority in geometric detail, texture fidelity, and generalization capability.
ISSN: 2331-8422
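
The summary above outlines a three-stage pipeline: estimate an SMPL-X body prior from the input image, sample consistent multi-view color and normal images with a cross-scale (full-body plus face) diffusion model, and carve a textured mesh initialized from SMPL-X. The following minimal Python sketch only illustrates that data flow; every function name, array shape, and the view count here is an assumption made for illustration, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical stand-ins for the three stages described in the summary; the real
# PSHuman components (SMPL-X estimator, cross-scale diffusion, explicit carving)
# are not reproduced here.

def estimate_smplx(image: np.ndarray) -> np.ndarray:
    """Placeholder: return SMPL-X pose/shape parameters used as a body prior."""
    return np.zeros(85)  # illustrative parameter-vector size (assumption)

def cross_scale_diffusion(image: np.ndarray, smplx_params: np.ndarray,
                          num_views: int = 6) -> dict:
    """Placeholder: jointly sample full-body and face crops across several views,
    returning per-view color and normal maps (random arrays stand in for samples)."""
    h, w = 512, 512
    return {
        "body_color":  np.random.rand(num_views, h, w, 3),
        "body_normal": np.random.rand(num_views, h, w, 3),
        "face_color":  np.random.rand(num_views, 256, 256, 3),
        "face_normal": np.random.rand(num_views, 256, 256, 3),
    }

def explicit_carving(views: dict, smplx_params: np.ndarray) -> dict:
    """Placeholder: deform an SMPL-X-initialized mesh toward the generated
    multi-view normals, then bake texture from the color images."""
    vertices = np.zeros((10475, 3))            # SMPL-X template vertex count
    texture = views["body_color"].mean(axis=0)  # trivial stand-in for texture baking
    return {"vertices": vertices, "texture": texture}

def reconstruct(image: np.ndarray) -> dict:
    smplx_params = estimate_smplx(image)                # body prior
    views = cross_scale_diffusion(image, smplx_params)  # novel-view normals/colors
    return explicit_carving(views, smplx_params)        # textured mesh

if __name__ == "__main__":
    mesh = reconstruct(np.random.rand(512, 512, 3))
    print(mesh["vertices"].shape, mesh["texture"].shape)
```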