On Demand Solid Texture Synthesis Using Deep 3D Networks

Bibliographic Details
Published in: Computer Graphics Forum, Vol. 39, No. 1, pp. 511-530
Main Authors: Gutierrez, J., Rabin, J., Galerne, B., Hurtut, T.
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd / Wiley, 01.02.2020

Summary: This paper describes a novel approach for on-demand volumetric texture synthesis, based on a deep learning framework that generates high-quality three-dimensional (3D) data at interactive rates. From a few example images of textures, a generative network is trained to synthesize coherent portions of solid textures of arbitrary size that reproduce the visual characteristics of the examples along some directions. To cope with the memory limitations and computational complexity inherent to both high-resolution and 3D processing on the GPU, only 2D textures, referred to as 'slices', are generated during the training stage. These synthetic textures are compared to the exemplar images via a perceptual loss function based on a pre-trained deep network. The proposed network is very light (fewer than 100k parameters), so it requires only a few hours of training and is capable of very fast generation (around a second for 256³ voxels) on a single GPU. Integrated with a spatially seeded pseudo-random number generator (PRNG), the generator network directly returns an RGB value given a set of 3D coordinates. The synthesized volumes are visually at least on par with state-of-the-art patch-based approaches; they are naturally seamlessly tileable and can be fully generated in parallel.
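As a rough illustration of the slice-based training objective described in the summary, the sketch below implements a Gatys-style Gram-matrix loss on pre-trained VGG-19 features. This is a common instantiation of such a perceptual loss, not the authors' implementation; the layer indices, weighting, and the assumption of ImageNet-normalized inputs are illustrative choices.

```python
# Hedged sketch: compare synthesized 2D slices to the exemplar through
# Gram matrices of pre-trained VGG-19 features (Gatys-style perceptual
# loss). Layer indices below are assumed, not the paper's configuration.
import torch
import torch.nn.functional as F
import torchvision


def gram(feat):
    # feat: (B, C, H, W) -> (B, C, C) channel-correlation matrix
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)


class SliceLoss(torch.nn.Module):
    def __init__(self, layers=(3, 8, 17, 26)):  # relu1_2..relu4_4 (assumed)
        super().__init__()
        # Requires torchvision >= 0.13 for the `weights` keyword.
        vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.layers = set(layers)

    def features(self, x):
        # x: (B, 3, H, W), assumed already ImageNet-normalized
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layers:
                feats.append(x)
        return feats

    def forward(self, slice_img, exemplar):
        loss = 0.0
        for fs, fe in zip(self.features(slice_img), self.features(exemplar)):
            loss = loss + F.mse_loss(gram(fs), gram(fe))
        return loss
```

Because the loss only ever sees 2D slices, the 3D generator can be trained without materializing full volumes, which is how the memory constraints mentioned above are sidestepped.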
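The on-demand, tileable evaluation hinges on the spatially seeded PRNG: the noise feeding the generator is a deterministic function of the voxel coordinate, so any voxel can be computed independently. A minimal sketch of that idea follows, with hypothetical hash constants and a plain NumPy RNG standing in for whatever generator the authors use.

```python
# Minimal sketch (not the paper's architecture): deterministic,
# coordinate-seeded noise that makes per-voxel evaluation local,
# parallelizable, and seamlessly tileable.
import numpy as np


def seeded_noise(x, y, z, channels=8, period=256):
    """Deterministic pseudo-random noise for an integer 3D coordinate.

    Wrapping coordinates modulo `period` makes the volume tileable.
    The hash constants are illustrative only.
    """
    x, y, z = x % period, y % period, z % period
    seed = (x * 73856093) ^ (y * 19349663) ^ (z * 83492791)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(channels)


# Same coordinate -> same noise -> same color after the trained generator,
# so voxels (or blocks of voxels) can be generated in any order, in parallel.
assert np.allclose(seeded_noise(3, 5, 7), seeded_noise(3, 5, 7))
assert np.allclose(seeded_noise(3, 5, 7), seeded_noise(3 + 256, 5, 7))  # tileable
```

In the paper this noise feeds a small convolutional generator that maps it to an RGB value; the sketch only captures the coordinate-to-noise determinism that makes on-demand evaluation possible.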
ISSN: 0167-7055
EISSN: 1467-8659
DOI: 10.1111/cgf.13889