Novel View Synthesis Based on Similar Perspective
Published in | Computer animation and virtual worlds, Vol. 36, no. 1
Format | Journal Article
Language | English
Published | Hoboken, USA: John Wiley & Sons, Inc, 01.01.2025 (Wiley Subscription Services, Inc)
Summary | ABSTRACT
Neural radiance fields (NeRF) technology has garnered significant attention due to its exceptional performance in generating high‐quality novel view images. In this study, we propose a method that leverages the similarity between views to enhance the quality of novel view image generation. First, a pre‐trained NeRF model generates an initial novel view image; the reference view most similar to this initial image is then selected from the training dataset. A texture transfer module, which progresses from coarse to fine, integrates salient features from the reference view into the initial image, producing a more realistic novel view. By exploiting similar views, this approach not only improves the quality of novel view images but also treats the training dataset as a dynamic information pool during synthesis, allowing useful information to be drawn from the training data continuously throughout the process. Extensive experimental validation shows that using similar views to provide scene information significantly outperforms existing neural rendering techniques in the realism and accuracy of the synthesized images.
The approach capitalizes on reference views that closely resemble the initial novel view images, selected carefully from the training set. In scenarios where established methods such as Instant‐NGP struggle, the model stands out: it efficiently identifies similar scene elements within the reference views (indicated in white), which assists in estimating output regions (highlighted in red) that Instant‐NGP could potentially overlook.
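The pipeline in the abstract has three steps: render an initial view with a pre‐trained NeRF, select the most similar reference view from the training set, and transfer texture from coarse to fine. A minimal sketch of the last two steps is given below; the feature representation, the cosine‐similarity selection, and the `coarse_to_fine_transfer` blend are all illustrative assumptions, not the authors' implementation, which this record does not reproduce.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened feature vectors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def select_reference_view(novel_feat, train_feats):
    """Return the index of the training view whose features are most
    similar to the rendered novel view (the selection step)."""
    scores = [cosine_similarity(novel_feat, f) for f in train_feats]
    return int(np.argmax(scores))

def coarse_to_fine_transfer(initial, reference, levels=3, alpha=0.5):
    """Toy stand-in for the texture-transfer module: blend reference
    detail into the initial render at progressively finer scales."""
    out = initial.copy()
    for level in range(levels, 0, -1):
        # Stride over the image to mimic a coarse-to-fine pyramid:
        # large strides touch few pixels (coarse), stride 1 touches all.
        stride = 2 ** (level - 1)
        out[::stride, ::stride] = (
            (1 - alpha) * out[::stride, ::stride]
            + alpha * reference[::stride, ::stride]
        )
    return out
```

In the paper, the selection would use richer image features and the transfer module a learned network; the blend above only mimics the coarse‐to‐fine ordering.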
Bibliography | Funding: This work was supported by the Fundamental Research Funds for the Provincial Universities, Zhejiang Institute of Economics and Trade (Grant Number: 24YQ04).
ISSN | 1546-4261, 1546-427X
DOI | 10.1002/cav.70006