Automated Grain Boundary (GB) Segmentation and Microstructural Analysis in 347H Stainless Steel Using Deep Learning and Multimodal Microscopy

Bibliographic Details
Published in: arXiv.org
Main Authors: Shoieb Ahmed Chowdhury, M F N Taufique, Jing Wang, Marissa Masden, Madison Wenzlick, Ram Devanathan, Alan L Schemer-Kohrn, Keerti S Kappagantula
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 12.05.2023

Summary: Austenitic 347H stainless steel offers the superior mechanical properties and corrosion resistance required for extreme operating conditions such as high temperatures. Changes in microstructure due to composition and process variations are expected to impact material properties. Identifying microstructural features such as grain boundaries thus becomes an important task in the process-microstructure-properties loop. Convolutional neural network (CNN) based deep-learning models are a powerful technique for detecting features in material micrographs in an automated manner. However, manual labeling of images for the segmentation task poses a major bottleneck to generating training data and labels reliably, reproducibly, and within a reasonable timeframe. In this study, we attempt to overcome such limitations by using multimodal microscopy to generate labels directly instead of manual labeling. We use scanning electron microscopy (SEM) images of 347H stainless steel as training data and electron backscatter diffraction (EBSD) micrographs as pixel-wise labels for grain boundary detection, posed as a semantic segmentation task. We demonstrate that, despite instrumentation drift introduced during data collection between the two modes of microscopy, this method performs comparably to similar segmentation tasks that used manual labeling. Additionally, we find that naïve pixel-wise segmentation produces small gaps and missing boundaries in the predicted grain boundary map. By incorporating topological information during model training, the connectivity of the grain boundary network and the segmentation performance are improved. Finally, our approach is validated on downstream tasks by accurately predicting the underlying grain morphology distributions, which are the ultimate quantities of interest for microstructural characterization.
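
As an illustration of the downstream morphology step described in the summary, the sketch below shows one way a grain size distribution could be computed from a predicted binary grain-boundary map using connected-component labeling. This is not the authors' code; the function name, the default pixel size, and the synthetic demo pattern are assumptions made purely for the example.

import numpy as np
from scipy import ndimage

def grain_size_distribution(gb_map, microns_per_pixel=0.5):
    # Estimate equivalent circular grain diameters (in microns) from a binary
    # grain-boundary map where 1 marks boundary pixels and 0 marks grain interiors.
    grains = gb_map == 0                        # grain interiors are the non-boundary pixels
    labels, _ = ndimage.label(grains)           # 4-connected components by default
    areas_px = np.bincount(labels.ravel())[1:]  # per-grain pixel counts, skipping background label 0
    areas_um2 = areas_px * microns_per_pixel ** 2      # convert to physical area
    return 2.0 * np.sqrt(areas_um2 / np.pi)             # equivalent circle diameters

# Usage on a synthetic map with two crossing boundaries (four grains):
demo = np.zeros((64, 64), dtype=np.uint8)
demo[32, :] = 1
demo[:, 32] = 1
diameters = grain_size_distribution(demo, microns_per_pixel=1.0)
print(f"{diameters.size} grains, mean equivalent diameter {diameters.mean():.1f} um")

Note that any small gap in the predicted boundary map would merge the adjoining grains in this labeling step, which is why the summary emphasizes improving the connectivity of the grain boundary network before computing morphology statistics.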
ISSN: 2331-8422