Towards efficient and photorealistic 3D human reconstruction: A brief survey
| Published in | Visual Informatics (Online), Vol. 5, No. 4, pp. 11–19 |
| --- | --- |
| Format | Journal Article |
| Language | English |
| Published | Elsevier B.V., 01.12.2021 |
| Summary | Reconstructing 3D digital models of humans from sensory data is a long-standing problem in computer vision and graphics, with a variety of applications in VR/AR, film production, and human–computer interaction. While a huge amount of effort has been devoted to developing various capture hardware and reconstruction algorithms, traditional reconstruction pipelines may still suffer from high-cost capture systems and tedious capture processes, which prevent them from being easily accessible. Moreover, the carefully hand-crafted pipelines are prone to reconstruction artifacts, resulting in limited visual quality. To address these challenges, the recent trend in this area is to use deep neural networks to improve reconstruction efficiency and robustness by learning human priors from existing data. Neural network-based implicit functions have also been shown to be a favorable 3D representation compared to traditional forms such as meshes and voxels. Furthermore, neural rendering has emerged as a powerful tool for achieving highly photorealistic modeling and re-rendering of humans by end-to-end optimization of the visual quality of output images. In this article, we briefly review these advances in this fast-developing field, discuss the advantages and limitations of different approaches, and finally share some thoughts on future research directions. |
| --- | --- |
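The summary's claim that neural implicit functions are a favorable 3D representation can be made concrete with a minimal sketch: instead of storing occupancy on a fixed voxel grid, a network maps any continuous 3D coordinate to an occupancy value. The code below is illustrative only and not taken from the survey; the tiny randomly initialized MLP stands in for a trained network.

```python
import numpy as np

# Illustrative sketch (not the survey's method): an implicit function
# represents a 3D shape as a mapping f(x) -> occupancy in [0, 1],
# queried at arbitrary continuous points rather than stored on a grid.
rng = np.random.default_rng(0)

# Tiny 2-layer MLP with random weights, standing in for a trained model.
W1, b1 = rng.normal(size=(3, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)

def occupancy(points):
    """Evaluate the implicit field at an (N, 3) array of query points."""
    h = np.maximum(points @ W1 + b1, 0.0)        # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid -> [0, 1]

# Unlike meshes or voxels, any continuous coordinate can be queried at
# any resolution, with memory cost fixed by the network size alone.
queries = rng.uniform(-1.0, 1.0, size=(5, 3))
occ = occupancy(queries)
print(occ.shape)  # one occupancy value per query point
```

In practice such a network would be fitted to scans or images; the key contrast with voxels is that resolution is decided at query time, not at storage time.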
| ISSN | 2468-502X |
| --- | --- |
| DOI | 10.1016/j.visinf.2021.10.003 |