A Hierarchical Architecture for Neural Materials

Bibliographic Details
Published in: Computer Graphics Forum, Vol. 43, No. 6
Main Authors: Xue, Bowen; Zhao, Shuang; Jensen, Henrik Wann; Montazeri, Zahra
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.09.2024

Summary: Neural reflectance models are capable of reproducing the spatially-varying appearance of many real-world materials at different scales. Unfortunately, existing techniques such as NeuMIP have difficulties handling materials with strong shadowing effects or detailed specular highlights. In this paper, we introduce a neural appearance model that offers a new level of accuracy. Central to our model is an inception-based core network structure that captures material appearances at multiple scales using parallel-operating kernels and extracts multi-stage features through specialized convolution layers. Furthermore, we encode the inputs into frequency space, introduce a gradient-based loss, and apply it adaptively according to the progress of the learning phase. We demonstrate the effectiveness of our method using a variety of synthetic and real examples.

We propose a new framework that improves neural materials to better capture highly glossy materials, self-shadowing, and sharp highlights by introducing a new hierarchical architecture and an input-encoding step that maps the training inputs into a higher-dimensional space. For better robustness, we also introduce new losses that allow our model to capture both high- and low-frequency effects.
ISSN: 0167-7055 (print), 1467-8659 (online)
DOI: 10.1111/cgf.15116
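
The summary above mentions encoding the network inputs into frequency space. The sketch below is a minimal, hypothetical illustration (not the authors' code) of one common way to do this, a sinusoidal frequency encoding in PyTorch; the input layout and the number of frequency bands are assumptions.

```python
import torch

def frequency_encode(x: torch.Tensor, num_bands: int = 6) -> torch.Tensor:
    """Map each input coordinate to [sin(2^k * pi * x), cos(2^k * pi * x)].

    x has shape (..., dim); the result has shape (..., dim * 2 * num_bands).
    The band count is an assumed hyper-parameter, not taken from the paper.
    """
    freqs = 2.0 ** torch.arange(num_bands, device=x.device) * torch.pi
    angles = x.unsqueeze(-1) * freqs            # (..., dim, num_bands)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)            # (..., dim * 2 * num_bands)
```

Lifting low-dimensional inputs such as UV coordinates and query directions into a higher-dimensional sinusoidal basis is a standard way to help a small network represent sharp, high-frequency appearance features.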
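
The summary also describes an inception-based core network that uses parallel-operating kernels to capture appearance at multiple scales. The block below is a generic inception-style sketch under that reading; the channel split and kernel sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel convolution branches with different receptive fields."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 3
        self.branch1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.branch3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, out_ch - 2 * branch_ch,
                                 kernel_size=5, padding=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Run the kernels in parallel and concatenate their feature maps,
        # so the block sees the material at several scales at once.
        return self.act(torch.cat(
            [self.branch1(x), self.branch3(x), self.branch5(x)], dim=1))
```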
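
Finally, the gradient-based loss that is applied adaptively over the course of training could be realized, for example, as an L1 loss on finite-difference image gradients whose weight is ramped with training progress. The sketch below shows that interpretation only; both the gradient definition and the linear ramp schedule are assumptions, not the paper's actual loss.

```python
import torch
import torch.nn.functional as F

def image_gradients(img: torch.Tensor):
    """Finite-difference gradients of an NCHW image along x and y."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def adaptive_loss(pred: torch.Tensor, target: torch.Tensor,
                  step: int, total_steps: int) -> torch.Tensor:
    """L1 on pixels plus a gradient term whose weight grows during training."""
    base = F.l1_loss(pred, target)
    pdx, pdy = image_gradients(pred)
    tdx, tdy = image_gradients(target)
    grad = F.l1_loss(pdx, tdx) + F.l1_loss(pdy, tdy)
    w = min(step / total_steps, 1.0)   # assumed linear ramp of the weight
    return base + w * grad
```

A gradient term of this kind emphasizes sharp highlights and shadow boundaries, while scheduling its weight lets early training focus on low-frequency appearance before the sharper details are enforced.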