TriVol: Point Cloud Rendering via Triple Volumes
Main Authors | , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 29.03.2023 |
Summary: | Existing learning-based methods for point cloud rendering adopt various 3D
representations and feature-querying mechanisms to alleviate the sparsity
problem of point clouds. However, artifacts still appear in rendered images,
owing to the difficulty of extracting continuous and discriminative 3D features
from point clouds. In this paper, we present a dense yet lightweight 3D
representation, named TriVol, that can be combined with NeRF to render
photo-realistic images from point clouds. TriVol consists of triple slim
volumes, each encoded from the point cloud, and has two advantages. First, it
fuses feature fields at different scales and thus extracts both local and
non-local features for a discriminative representation. Second, since the
volume size is greatly reduced, inference through our 3D decoder is efficient,
allowing us to increase the resolution of the 3D space and render finer point
details. Extensive experiments on benchmarks covering varied kinds of
scenes and objects demonstrate our framework's effectiveness compared with
current approaches. Moreover, our framework generalizes well, rendering a
whole category of scenes/objects without fine-tuning. |
---|---|
DOI: | 10.48550/arxiv.2303.16485 |
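The abstract above describes two mechanisms worth unpacking: encoding the point cloud into three slim feature volumes, and querying a 3D point by sampling all three volumes and fusing the results before a NeRF-style decoder. Below is a minimal PyTorch sketch of that query-and-fuse step only, not the authors' code: the channel count `C`, slim-axis size `S`, full resolution `R`, the choice of which axis each volume is thin along, and the concatenation fusion are illustrative assumptions, and the point-cloud encoder, 3D decoder, and volume renderer are omitted.

```python
# Minimal sketch (assumed configuration, not the paper's exact one) of
# querying a "triple slim volume" representation: each volume is
# high-resolution along two axes and thin along the third, so together
# they cover 3D space at a fraction of a dense grid's memory cost.
import torch
import torch.nn.functional as F

C, S, R = 16, 8, 128  # feature channels, slim-axis size, full resolution

# Three slim volumes, each thin along one of x / y / z (batch size 1).
# grid_sample expects (N, C, D, H, W) with D<->z, H<->y, W<->x.
vol_x = torch.randn(1, C, R, R, S)  # slim along x (W axis)
vol_y = torch.randn(1, C, R, S, R)  # slim along y (H axis)
vol_z = torch.randn(1, C, S, R, R)  # slim along z (D axis)

def query_trivol(points: torch.Tensor) -> torch.Tensor:
    """Sample and fuse features for 3D points in [-1, 1]^3.

    points: (P, 3) in (x, y, z) order; returns (P, 3*C) fused features.
    """
    # grid_sample takes a (N, D, H, W, 3) grid of sample locations;
    # pack the P query points into a degenerate 1x1xP "volume".
    grid = points.view(1, 1, 1, -1, 3)
    feats = []
    for vol in (vol_x, vol_y, vol_z):
        # mode="bilinear" performs trilinear interpolation on 5D input.
        f = F.grid_sample(vol, grid, mode="bilinear",
                          align_corners=True)  # (1, C, 1, 1, P)
        feats.append(f.view(C, -1))            # (C, P)
    # Fuse by concatenation (an assumption); output feeds a NeRF-style
    # MLP that predicts density and color for volume rendering.
    return torch.cat(feats, dim=0).t()         # (P, 3*C)

pts = torch.rand(1024, 3) * 2 - 1  # e.g. sample points along camera rays
fused = query_trivol(pts)
print(fused.shape)                 # torch.Size([1024, 48])
```

In this reading, fusing the three samples combines complementary views of the same location, which is one plausible way the "local and non-local features" of the abstract could arise: each slim volume aggregates point features coarsely along its thin axis while staying sharp along the other two.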