Towards efficient and photorealistic 3D human reconstruction: A brief survey

Bibliographic Details
Published in: Visual Informatics (Online), Vol. 5, No. 4, pp. 11–19
Main Authors: Chen, Lu; Peng, Sida; Zhou, Xiaowei
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.12.2021

Summary: Reconstructing 3D digital models of humans from sensory data is a long-standing problem in computer vision and graphics, with a variety of applications in VR/AR, film production, and human–computer interaction. While a great deal of effort has been devoted to developing capture hardware and reconstruction algorithms, traditional reconstruction pipelines still rely on costly capture systems and tedious capture processes, which limit their accessibility. Moreover, these hand-crafted pipelines are prone to reconstruction artifacts, resulting in limited visual quality. To address these challenges, the recent trend in this area is to use deep neural networks to improve reconstruction efficiency and robustness by learning human priors from existing data. Neural network-based implicit functions have also been shown to be a favorable 3D representation compared to traditional forms such as meshes and voxels. Furthermore, neural rendering has emerged as a powerful tool for highly photorealistic modeling and re-rendering of humans by optimizing the visual quality of output images end to end. In this article, we briefly review these advances in this fast-developing field, discuss the advantages and limitations of different approaches, and finally share some thoughts on future research directions.
ISSN: 2468-502X
DOI: 10.1016/j.visinf.2021.10.003
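
As a rough illustration of the neural implicit representations mentioned in the summary, the sketch below implements a coordinate-based occupancy MLP with a Fourier positional encoding in PyTorch. It is a minimal sketch under stated assumptions: the class name ImplicitOccupancyMLP, the layer sizes, and the number of encoding frequencies are illustrative choices, not the design of any particular surveyed method.

import math
import torch
import torch.nn as nn

class ImplicitOccupancyMLP(nn.Module):
    """Coordinate MLP: maps a 3D point to the probability it lies inside the body surface."""
    def __init__(self, num_freqs=6, hidden=256):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 + 3 * 2 * num_freqs  # raw xyz plus sin/cos Fourier features
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # single occupancy logit
        )

    def encode(self, xyz):
        # Fourier (positional) encoding helps the MLP represent fine surface detail.
        feats = [xyz]
        for i in range(self.num_freqs):
            for fn in (torch.sin, torch.cos):
                feats.append(fn((2.0 ** i) * math.pi * xyz))
        return torch.cat(feats, dim=-1)

    def forward(self, xyz):
        # xyz: (N, 3) query points -> (N, 1) occupancy probabilities in [0, 1].
        return torch.sigmoid(self.mlp(self.encode(xyz)))

if __name__ == "__main__":
    model = ImplicitOccupancyMLP()
    points = torch.rand(1024, 3) * 2.0 - 1.0  # random queries in [-1, 1]^3
    print(model(points).shape)                # torch.Size([1024, 1])

Unlike a fixed-resolution voxel grid or a template mesh, such a network can be queried at arbitrary 3D points, so the surface can be extracted at any resolution (e.g., via marching cubes on the predicted occupancy field).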