Drivable 3D Gaussian Avatars

Bibliographic Details
Published in: arXiv.org
Main Authors: Zielonka, Wojciech; Bagautdinov, Timur; Saito, Shunsuke; Zollhöfer, Michael; Thies, Justus; Romero, Javier
Format: Paper / Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 14.11.2023
More Information
Summary: We present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable model for human bodies rendered with Gaussian splats. Current photorealistic drivable avatars require either accurate 3D registrations during training, dense input images during testing, or both. The ones based on neural radiance fields also tend to be prohibitively slow for telepresence applications. This work uses the recently presented 3D Gaussian Splatting (3DGS) technique to render realistic humans at real-time framerates, using dense calibrated multi-view videos as input. To deform those primitives, we depart from the commonly used point deformation method of linear blend skinning (LBS) and use a classic volumetric deformation method: cage deformations. Given their smaller size, we drive these deformations with joint angles and keypoints, which are more suitable for communication applications. Our experiments on nine subjects with varied body shapes, clothes, and motions obtain higher-quality results than state-of-the-art methods when using the same training and test data.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2311.08581
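The cage deformation idea described in the summary — moving primitives by deforming an enclosing cage rather than by per-point linear blend skinning — can be illustrated with a minimal sketch. This is not the paper's implementation (D3GA uses tetrahedral cages driven by joint angles and keypoints); it only shows the core mechanism with a hypothetical axis-aligned box cage and trilinear coordinates: each point's weights with respect to the cage corners are computed once in the rest pose, and deforming the cage then carries the point along.

```python
import numpy as np

def trilinear_weights(p, lo, hi):
    """Generalized barycentric (trilinear) weights of point p inside
    an axis-aligned box cage with corners between lo and hi."""
    tx, ty, tz = (p - lo) / (hi - lo)  # normalized position in [0,1]^3
    # one weight per cage corner; corner i uses hi on axis k iff bit k of i is set
    return np.array([
        (1-tx)*(1-ty)*(1-tz), tx*(1-ty)*(1-tz),
        (1-tx)*ty*(1-tz),     tx*ty*(1-tz),
        (1-tx)*(1-ty)*tz,     tx*(1-ty)*tz,
        (1-tx)*ty*tz,         tx*ty*tz,
    ])

def cage_corners(lo, hi):
    """The 8 corners of the box cage, ordered to match trilinear_weights."""
    return np.array([[hi[0] if i & 1 else lo[0],
                      hi[1] if i & 2 else lo[1],
                      hi[2] if i & 4 else lo[2]] for i in range(8)], float)

# rest-pose cage: the unit box; weights are computed once per point
lo, hi = np.zeros(3), np.ones(3)
p = np.array([0.25, 0.5, 0.5])
w = trilinear_weights(p, lo, hi)          # weights sum to 1

# deform the cage (here: translate every corner); the point follows
deformed_cage = cage_corners(lo, hi) + np.array([1.0, 0.0, 0.0])
p_deformed = w @ deformed_cage            # -> [1.25, 0.5, 0.5]
```

Because the weights are fixed at rest, driving only the small set of cage vertices (e.g., from joint angles or keypoints, as in the paper) deforms all enclosed Gaussian primitives smoothly and cheaply.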