TwineNet: coupling features for synthesizing volume rendered images via convolutional encoder–decoders and multilayer perceptrons


Bibliographic Details
Published in: The Visual Computer, Vol. 40, No. 10, pp. 7201–7220
Main Authors: Luo, Shengzhou; Xu, Jingxing; Dingliana, John; Wei, Mingqiang; Han, Lu; He, Lewei; Pan, Jiahui
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.10.2024

Summary: Volume visualization plays a crucial role in both academia and industry, as volumetric data is extensively utilized in fields such as medicine, geosciences, and engineering. Addressing the complexities of volume rendering, neural rendering has emerged as a potential solution, facilitating the production of high-quality volume rendered images. In this paper, we propose TwineNet, a neural network architecture specifically designed for volume rendering. TwineNet combines features extracted from volume data, transfer functions, and viewpoints by utilizing twining skip connections across multiple feature layers. Building upon the TwineNet architecture, we introduce two neural networks, VolTFNet and PosTFNet, which leverage convolutional encoder–decoders and multilayer perceptrons to synthesize volume rendered images with novel transfer functions and viewpoints. Our experimental findings demonstrate the superiority of our models compared to state-of-the-art approaches in generating high-quality volume rendered images with novel transfer functions and viewpoints. This research contributes to advancing the field of volume rendering and showcases the potential of neural rendering techniques in scientific visualization.
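The record does not reproduce the paper's architecture details, but the core idea the summary describes, coupling MLP embeddings of the transfer function and viewpoint with convolutional encoder features at multiple layers, can be sketched as follows. All names, layer sizes, and the exact coupling scheme here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Two-layer perceptron with ReLU, used to embed the transfer
    # function samples and the viewpoint parameters (hypothetical sizes).
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

# Assumed inputs: a transfer function sampled at 64 entries and a
# 3-D viewpoint vector, each embedded into a 16-D code.
tf_samples = rng.random(64)
viewpoint = rng.random(3)

w1_tf, b1_tf = rng.normal(size=(64, 32)) * 0.1, np.zeros(32)
w2_tf, b2_tf = rng.normal(size=(32, 16)) * 0.1, np.zeros(16)
tf_code = mlp(tf_samples, w1_tf, b1_tf, w2_tf, b2_tf)

w1_v, b1_v = rng.normal(size=(3, 32)) * 0.1, np.zeros(32)
w2_v, b2_v = rng.normal(size=(32, 16)) * 0.1, np.zeros(16)
view_code = mlp(viewpoint, w1_v, b1_v, w2_v, b2_v)

# Encoder feature maps at two spatial scales (channels, H, W),
# standing in for the convolutional encoder's intermediate outputs.
enc_feats = [rng.random((8, 32, 32)), rng.random((16, 16, 16))]

def twine(feat, codes):
    # A skip connection that "twines" modalities: each code is
    # broadcast over the spatial grid and concatenated with the
    # encoder features, so every decoder stage receives volume,
    # transfer-function, and viewpoint information together.
    c, h, w = feat.shape
    tiled = [np.broadcast_to(code[:, None, None], (code.size, h, w))
             for code in codes]
    return np.concatenate([feat] + tiled, axis=0)

decoder_inputs = [twine(f, (tf_code, view_code)) for f in enc_feats]
print([d.shape for d in decoder_inputs])  # [(40, 32, 32), (48, 16, 16)]
```

In this sketch the coupling happens at every encoder scale rather than only at the bottleneck, which is one plausible reading of "twining skip connections across multiple feature layers"; a convolutional decoder would then upsample from these concatenated maps to produce the rendered image.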
ISSN: 0178-2789, 1432-2315
DOI: 10.1007/s00371-024-03368-5