Vision Transformer-based Sinogram Enlarging for Reducing Artifacts in Sparse-view Micro-CT

Bibliographic Details
Published in: 2023 IEEE Nuclear Science Symposium, Medical Imaging Conference and International Symposium on Room-Temperature Semiconductor Detectors (NSS MIC RTSD), p. 1
Main Authors: Okamoto, T.; Haneishi, H.
Format: Conference Proceeding
Language: English
Published: IEEE, 04.11.2023
Summary: Micro-computed tomography (micro-CT) provides three-dimensional (3D) morphological structures at the micrometer scale. Although this modality is expected to contribute to histopathology by enabling analysis of the 3D microstructures of tissue specimens, micro-CT imaging of tissue specimens requires a long scan time. Sparse-view CT, which reduces the number of projections, holds great promise for speeding up the scanning process; however, analytical reconstruction from fewer projections causes severe streak artifacts in tomographic images. Convolutional neural network (CNN)-based artifact reduction methods have been proposed and have outperformed conventional filtering and compressed sensing approaches. However, CNNs are inefficient at capturing long-range characteristics. The Vision Transformer has recently emerged as an alternative to CNNs, and the Swin Transformer achieved state-of-the-art performance on image classification benchmarks. SwinIR, which is built on the Swin Transformer, has likewise outperformed CNN-based networks on image super-resolution tasks. In this paper, we propose an artifact reduction method for sparse-view micro-CT. We developed a SwinIR-based deep learning network that vertically enlarges sparse-view sinograms to estimate full-view sinograms. Experimental results showed that the proposed method achieved the best performance, with the lowest total error, among the compared methods.
ISSN: 2577-0829
DOI: 10.1109/NSSMICRTSD49126.2023.10338258
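The core idea described in the abstract, treating sparse-view artifact reduction as an enlargement of the sinogram along its angular axis, can be illustrated with a short sketch. The snippet below simulates a sparse-view acquisition by subsampling projection angles of a full-view sinogram and then enlarges it back to the full angular sampling before filtered back projection; a simple interpolation stands in for the SwinIR-based network of the paper, and the phantom, angle counts, and subsampling factor are illustrative assumptions rather than details taken from the publication.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# Full-view acquisition: 720 projection angles over 180 degrees (illustrative values).
image = shepp_logan_phantom()
full_angles = np.linspace(0.0, 180.0, 720, endpoint=False)
full_sinogram = radon(image, theta=full_angles)        # shape: (detector bins, 720 angles)

# Sparse-view acquisition: keep every 8th projection (90 views).
sparse_sinogram = full_sinogram[:, ::8]
sparse_angles = full_angles[::8]

# Naive enlargement along the angular axis back to 720 views.
# In skimage's layout the angular axis is the column axis; the paper's
# "vertical" enlargement of the sinogram corresponds to this angular
# upsampling, performed there by a trained SwinIR-based network instead
# of the linear interpolation used here.
enlarged_sinogram = resize(sparse_sinogram,
                           full_sinogram.shape,
                           order=1,                     # linear interpolation
                           mode="edge",
                           anti_aliasing=False)

# Analytical (FBP) reconstructions for comparison: the sparse-view image
# shows streak artifacts, the enlarged-sinogram image reduces them.
recon_sparse = iradon(sparse_sinogram, theta=sparse_angles, filter_name="ramp")
recon_enlarged = iradon(enlarged_sinogram, theta=full_angles, filter_name="ramp")

In this framing, a learned model is expected to recover the missing projections more faithfully than interpolation, because neighbouring views of the same object are highly correlated over long angular ranges, which is the kind of long-range dependency the abstract argues Transformers capture better than CNNs.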