3D stochastic microstructure reconstruction via slice images and attention-mechanism-based GAN

Bibliographic Details
Published in: Computer-Aided Design, Vol. 176, p. 103760
Main Authors: Zhang, Ting; Bian, Ningjie; Li, Xue
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.11.2024
ISSN: 0010-4485, 1879-2685
DOI: 10.1016/j.cad.2024.103760

Summary:
Highlights:
• 2D slices of a 3D image are used as training images to lower the GPU burden.
• Convolutional triplet attention is used to prioritize the learned features.
• Only a single 3D image is required for the whole training process.

Stochastic media are used to characterize materials with irregular structure and spatial randomness, and the remarkable macroscopic properties of stochastic media are often determined by their internal microstructure. Hardware loads and computational burdens have long been a challenge in reconstructing large-volume materials. To address these concerns, this paper proposes a learning model based on a generative adversarial network (GAN) that uses multiple 2D slice images to reconstruct 3D stochastic microstructures. The whole training process requires only a single 3D image of a stochastic medium as the training image. In addition, an attention mechanism captures cross-dimensional interactions to prioritize the learned features and improve training effectiveness. The model is tested on stochastic porous media with a two-phase internal structure and complex morphology. The experimental findings demonstrate that using multiple 2D images helps the model learn better and reduces overfitting, while greatly lowering the model's hardware load.
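The abstract's central idea is to train on 2D slices cut from a single 3D training image rather than on full 3D volumes, which is what keeps the GPU memory demand low. The snippet below is a minimal illustrative sketch of that preprocessing step, assuming a NumPy volume with a two-phase (binary) structure; the function name and slicing scheme are hypothetical and not taken from the paper.

```python
import numpy as np

def extract_slices(volume: np.ndarray, axes=(0, 1, 2)) -> list[np.ndarray]:
    """Cut a 3D training image into 2D slices along the given axes.

    volume: (D, H, W) array of a two-phase medium (e.g. 0 = pore, 1 = solid).
    Returns a list of 2D arrays that can serve as the GAN's training set.
    """
    slices = []
    for axis in axes:
        for i in range(volume.shape[axis]):
            slices.append(np.take(volume, i, axis=axis))
    return slices

# A single 128^3 volume already yields 3 * 128 = 384 2D training images,
# each far cheaper to process than one full 3D volume.
volume = (np.random.rand(128, 128, 128) > 0.7).astype(np.float32)
training_images = extract_slices(volume)
print(len(training_images), training_images[0].shape)  # 384 (128, 128)
```

Training on slices from all three axes also exposes the network to the medium's statistics along every direction, which is consistent with the abstract's claim that multiple 2D images reduce overfitting.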
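The "cross-dimensional interactions" mentioned in the abstract refer to the convolutional triplet attention module (Misra et al.), in which three branches rotate the feature tensor so that each attends across a different pair of dimensions. Below is a minimal PyTorch sketch of that published module for orientation; how the paper integrates it into its GAN is not shown here, and the class names are my own.

```python
import torch
import torch.nn as nn

class ZPool(nn.Module):
    """Stack channel-wise max and mean maps into a 2-channel tensor."""
    def forward(self, x):
        return torch.cat(
            [x.max(dim=1, keepdim=True)[0], x.mean(dim=1, keepdim=True)], dim=1
        )

class AttentionGate(nn.Module):
    """Z-pool -> 7x7 conv -> batch norm -> sigmoid attention weights."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(1)

    def forward(self, x):
        return x * torch.sigmoid(self.bn(self.conv(self.pool(x))))

class TripletAttention(nn.Module):
    """Three branches capture (C,W), (C,H) and (H,W) interactions by
    rotating the tensor before applying the same attention-gate pattern."""
    def __init__(self):
        super().__init__()
        self.gate_cw = AttentionGate()
        self.gate_ch = AttentionGate()
        self.gate_hw = AttentionGate()

    def forward(self, x):  # x: (N, C, H, W)
        # Branch 1: H takes the channel slot -> attends across C and W.
        x_cw = self.gate_cw(x.permute(0, 2, 1, 3)).permute(0, 2, 1, 3)
        # Branch 2: W takes the channel slot -> attends across C and H.
        x_ch = self.gate_ch(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)
        # Branch 3: plain spatial attention across H and W.
        x_hw = self.gate_hw(x)
        # Average the three branches, as in the original module.
        return (x_cw + x_ch + x_hw) / 3.0

# Usage sketch: drop the module between convolutional layers of the
# generator or discriminator to reweight (prioritize) learned features.
feats = torch.randn(4, 64, 32, 32)
out = TripletAttention()(feats)  # same shape: (4, 64, 32, 32)
```

The module is nearly parameter-free (one 7x7 convolution per branch), so adding it to the GAN prioritizes features without materially increasing the hardware load the paper aims to reduce.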