KeypointNeRF: Generalizing Image-based Volumetric Avatars using Relative Spatial Encoding of Keypoints
Format | Journal Article
---|---
Language | English
Published | 10.05.2022
Summary: Image-based volumetric humans using pixel-aligned features promise generalization to unseen poses and identities. Prior work leverages global spatial encodings and multi-view geometric consistency to reduce spatial ambiguity. However, global encodings often suffer from overfitting to the distribution of the training data, and it is difficult to learn multi-view consistent reconstruction from sparse views. In this work, we investigate common issues with existing spatial encodings and propose a simple yet highly effective approach to modeling high-fidelity volumetric humans from sparse views. One of the key ideas is to encode relative spatial 3D information via sparse 3D keypoints. This approach is robust to the sparsity of viewpoints and to the cross-dataset domain gap. Our approach outperforms state-of-the-art methods for head reconstruction. On human body reconstruction for unseen subjects, we also achieve performance comparable to prior work that uses a parametric human body model and temporal feature aggregation. Our experiments show that a majority of errors in prior work stem from an inappropriate choice of spatial encoding, and thus we suggest a new direction for high-fidelity image-based human modeling.

Project page: https://markomih.github.io/KeypointNeRF
DOI: 10.48550/arxiv.2205.04992
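The key idea in the abstract is to describe a 3D query point relative to a set of sparse 3D keypoints rather than by its global coordinates, so the code is invariant to where the subject sits in space. A minimal NumPy sketch of one such relative encoding is below; the distance-to-keypoint features and the frequency count are illustrative assumptions, not the paper's exact formulation (KeypointNeRF encodes relative depth of keypoints per camera view).

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Sinusoidal encoding of each scalar in x (frequency count is an assumption)."""
    freqs = 2.0 ** np.arange(num_freqs)          # (F,)
    angles = x[..., None] * freqs                # (..., F)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def relative_keypoint_encoding(query, keypoints, num_freqs=4):
    """Encode a 3D query point relative to sparse 3D keypoints.

    query: (3,) point in space; keypoints: (K, 3).
    Returns a flat feature vector built from the query's distance to each
    keypoint, a relative code that is invariant to global translation.
    """
    diffs = keypoints - query[None, :]           # (K, 3) relative offsets
    dists = np.linalg.norm(diffs, axis=-1)       # (K,) distances
    enc = positional_encoding(dists, num_freqs)  # (K, 2 * num_freqs)
    return enc.reshape(-1)

# Translation invariance: shifting query and keypoints together leaves the code unchanged.
kps = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
q = np.array([0.5, 0.5, 0.5])
shift = np.array([10.0, -3.0, 2.0])
assert np.allclose(relative_keypoint_encoding(q, kps),
                   relative_keypoint_encoding(q + shift, kps + shift))
```

Such a per-point code would then be concatenated with pixel-aligned image features and fed to the NeRF MLP; global encodings lack this invariance, which the abstract identifies as a source of overfitting to the training distribution.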