Deep relightable textures: volumetric performance capture with neural rendering

Bibliographic Details
Published in: ACM Transactions on Graphics Vol. 39; no. 6; pp. 1 - 21
Main Authors: Meka, Abhimitra, Pandey, Rohit, Häne, Christian, Orts-Escolano, Sergio, Barnum, Peter, Davidson, Philip, Erickson, Daniel, Zhang, Yinda, Taylor, Jonathan, Bouaziz, Sofien, Legendre, Chloe, Ma, Wan-Chun, Overbeck, Ryan, Beeler, Thabo, Debevec, Paul, Izadi, Shahram, Theobalt, Christian, Rhemann, Christoph, Fanello, Sean
Format: Journal Article
Language: English
Published: New York, NY, USA: ACM, 26.11.2020

Summary: The increasing demand for 3D content in augmented and virtual reality has motivated the development of volumetric performance capture systems such as the Light Stage. Recent advances are pushing free viewpoint relightable videos of dynamic human performances closer to photorealistic quality. However, despite significant efforts, these sophisticated systems are limited by reconstruction and rendering algorithms which do not fully model complex 3D structures and higher order light transport effects such as global illumination and sub-surface scattering. In this paper, we propose a system that combines traditional geometric pipelines with a neural rendering scheme to generate photorealistic renderings of dynamic performances under desired viewpoint and lighting. Our system leverages deep neural networks that model the classical rendering process to learn implicit features that represent the view-dependent appearance of the subject independent of the geometry layout, allowing for generalization to unseen subject poses and even novel subject identity. Detailed experiments and comparisons demonstrate the efficacy and versatility of our method to generate high-quality results, significantly outperforming the existing state-of-the-art solutions.
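
The summary describes a classical geometry pipeline combined with a neural rendering network that decodes learned implicit features into view- and light-dependent appearance. As a rough, hedged illustration of this general idea (not the authors' actual architecture), the following PyTorch sketch samples a learned "neural texture" with rasterized UV coordinates and decodes it, together with per-pixel view directions and a lighting code, into an RGB image; all names, shapes, and layer choices here are assumptions for illustration only.

```python
# Simplified, hypothetical neural-texture renderer in PyTorch.
# Architecture, shapes, and names are illustrative assumptions,
# not the paper's pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTextureRenderer(nn.Module):
    def __init__(self, feat_dim=16, light_dim=9, hidden=64):
        super().__init__()
        # Learned per-texel feature map ("neural texture"),
        # optimized jointly with the decoder network.
        self.neural_texture = nn.Parameter(
            torch.randn(1, feat_dim, 512, 512) * 0.01)
        # Small convolutional decoder:
        # features + view direction + lighting code -> RGB.
        in_ch = feat_dim + 3 + light_dim
        self.decoder = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3, 1), nn.Sigmoid(),
        )

    def forward(self, uv, view_dir, light_code):
        # uv:         (B, H, W, 2) rasterized UV coordinates in [-1, 1]
        # view_dir:   (B, 3, H, W) per-pixel unit view directions
        # light_code: (B, light_dim), e.g. spherical-harmonic lighting
        B, H, W, _ = uv.shape
        tex = self.neural_texture.expand(B, -1, -1, -1)
        feats = F.grid_sample(tex, uv, align_corners=False)  # (B, feat_dim, H, W)
        light = light_code[:, :, None, None].expand(-1, -1, H, W)
        return self.decoder(torch.cat([feats, view_dir, light], dim=1))

# Toy usage with random buffers standing in for a rasterized mesh frame.
model = NeuralTextureRenderer()
uv = torch.rand(1, 256, 256, 2) * 2 - 1
view_dir = F.normalize(torch.randn(1, 3, 256, 256), dim=1)
light_code = torch.randn(1, 9)
rgb = model(uv, view_dir, light_code)  # (1, 3, 256, 256) relit rendering
```

In this kind of setup the texture features are view-independent while the decoder injects view and lighting dependence, which loosely mirrors the summary's separation of geometry layout from learned appearance.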
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3414685.3417814