Soft 3D reconstruction for view synthesis

Bibliographic Details
Published in: ACM Transactions on Graphics, Vol. 36, No. 6, pp. 1–11
Main Authors: Penner, Eric; Zhang, Li
Format: Journal Article
Language: English
Published: 20.11.2017
Summary: We present a novel algorithm for view synthesis that utilizes a soft 3D reconstruction to improve quality, continuity, and robustness. Our main contribution is the formulation of a soft 3D representation that preserves depth uncertainty through each stage of 3D reconstruction and rendering. We show that this representation is beneficial throughout the view synthesis pipeline. During view synthesis, it provides a soft model of scene geometry that yields continuity across synthesized views and robustness to depth uncertainty. During 3D reconstruction, the same robust estimates of scene visibility can be applied iteratively to improve depth estimation around object edges. Our algorithm is based entirely on O(1) filters, making it conducive to acceleration, and it works with structured or unstructured sets of input views. We compare with recent classical and learning-based algorithms on plenoptic lightfields, wide-baseline captures, and lightfield videos produced from camera arrays.
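The summary's notion of a "soft" geometry model can be illustrated with a small sketch. This is not the authors' exact method, only an assumption-laden toy: given a per-pixel vote volume over discrete depth planes (as a stereo matcher might produce), normalized votes are treated as occupancy probabilities, a soft visibility volume is derived by accumulating transmittance front to back, and a view is rendered by soft-blending per-plane colors. The function names `soft_visibility` and `render` are hypothetical.

```python
import numpy as np

def soft_visibility(votes):
    # votes: (D, H, W) non-negative depth votes, plane 0 nearest the camera.
    # Normalize votes into per-plane occupancy probabilities.
    occ = votes / (votes.sum(axis=0, keepdims=True) + 1e-8)
    # Transmittance after each plane: product of (1 - occupancy) so far.
    transmittance = np.cumprod(1.0 - occ, axis=0)
    # Visibility of plane d = transmittance through all nearer planes
    # (the first plane is fully visible).
    vis = np.concatenate([np.ones_like(occ[:1]), transmittance[:-1]], axis=0)
    return vis

def render(votes, colors):
    # colors: (D, H, W, 3) per-plane colors sampled from source views.
    occ = votes / (votes.sum(axis=0, keepdims=True) + 1e-8)
    # Soft blend weight of each plane: visible and occupied.
    w = soft_visibility(votes) * occ
    w = w / (w.sum(axis=0, keepdims=True) + 1e-8)
    # Weighted sum over depth gives the synthesized (H, W, 3) view.
    return (w[..., None] * colors).sum(axis=0)
```

Because visibility is a smooth product of probabilities rather than a hard depth test, uncertain depth spreads blend weight across several planes instead of snapping to one surface, which is the kind of continuity across synthesized views the summary describes.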
ISSN: 0730-0301, 1557-7368
DOI: 10.1145/3130800.3130855