NeLF-Pro: Neural Light Field Probes for Multi-Scale Novel View Synthesis
Main Authors | , , |
---|---|
Format | Journal Article |
Language | English |
Published | 20.12.2023 |
Summary: | We present NeLF-Pro, a novel representation for modeling and reconstructing light
fields in diverse natural scenes that vary in extent and spatial granularity.
In contrast to previous fast reconstruction methods that represent the 3D scene
globally, we model the light field of a scene as a set of local light field
feature probes, parameterized with position and multi-channel 2D feature maps.
Our central idea is to bake the scene's light field into spatially varying
learnable representations and to query point features by weighted blending of
probes close to the camera, allowing for mipmap representation and rendering.
We introduce a novel vector-matrix-matrix (VMM) factorization technique that
effectively represents the light field feature probes as products of core
factors (i.e., VM) shared among local feature probes and a basis factor (i.e.,
M), efficiently encoding internal relationships and patterns within the scene.
Experimentally, we demonstrate that NeLF-Pro significantly boosts the
performance of feature grid-based representations and achieves fast
reconstruction with better rendering quality while maintaining compact
modeling. Project webpage: https://sinoyou.github.io/nelf-pro/. |
---|---|
DOI: | 10.48550/arxiv.2312.13328 |
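The summary describes two mechanisms: querying point features by weighted blending of the probes closest to the camera, and a VMM factorization that builds each probe's feature map from core factors (a vector and a matrix) shared across probes plus a per-probe basis matrix. The sketch below illustrates both ideas; all tensor shapes, the inverse-distance blending weights, and every function name here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper):
# probes, factorization rank, feature channels, feature-map height/width.
P, R, C, H, W = 8, 4, 16, 32, 32

# VMM factorization: core vector + core matrix shared among probes,
# plus one basis matrix per probe.
core_v = rng.standard_normal((R, C))      # shared vector factor (V)
core_m = rng.standard_normal((R, H))      # shared matrix factor (M)
basis_m = rng.standard_normal((P, R, W))  # per-probe basis factor (M)

probe_pos = rng.standard_normal((P, 3))   # probe centers in world space

def probe_feature_map(p):
    """Feature map of probe p as a rank-R product of shared core and basis."""
    # Contract over the rank dimension: (R,C) x (R,H) x (R,W) -> (C,H,W).
    return np.einsum('rc,rh,rw->chw', core_v, core_m, basis_m[p])

def query(cam_pos, u, v, k=3):
    """Blend features at pixel (u, v) from the k probes nearest the camera."""
    d = np.linalg.norm(probe_pos - cam_pos, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)
    w /= w.sum()                          # normalized inverse-distance weights
    feats = np.stack([probe_feature_map(p)[:, u, v] for p in nearest])
    return np.einsum('k,kc->c', w, feats)

f = query(np.zeros(3), u=5, v=7)
print(f.shape)  # (16,): one blended C-channel feature vector
```

Sharing the core factors across probes is what keeps the model compact: only the small per-probe basis grows with the number of probes, while the shared VM factors capture scene-wide structure.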