Dataset augmentation with multiple contrasts images in super-resolution processing of T1-weighted brain magnetic resonance images

Bibliographic Details
Published in: Radiological Physics and Technology, Vol. 18, No. 1, pp. 172–185
Main Authors: Kageyama, Hajime; Yoshida, Nobukiyo; Kondo, Keisuke; Akai, Hiroyuki
Format: Journal Article
Language: English
Published: Singapore: Springer Nature Singapore (Springer Nature B.V.), 01.03.2025
ISSN: 1865-0333, 1865-0341
DOI: 10.1007/s12194-024-00871-1

Summary: This study investigated the effectiveness of augmenting datasets for deep-learning super-resolution processing of brain magnetic resonance imaging (MRI) T1-weighted images (T1WIs). By incorporating images with different contrasts from the same subject, this study sought to improve network performance and assess its impact on image quality metrics, such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). This retrospective study included 240 patients who underwent brain MRI. Two types of datasets were created: the Pure-Dataset group, comprising T1WIs only, and the Mixed-Dataset group, comprising T1WIs, T2-weighted images, and fluid-attenuated inversion recovery images. A U-Net-based network and an Enhanced Deep Super-Resolution (EDSR) network were trained on these datasets. Objective image quality analysis was performed using PSNR and SSIM. Statistical analyses, including paired t-tests and Pearson's correlation coefficient, were conducted to evaluate the results. Augmenting datasets with images of different contrasts significantly improved training accuracy as the dataset size increased. PSNR values ranged from 29.84 to 30.26 dB for U-Net trained on mixed datasets, and SSIM values ranged from 0.9858 to 0.9868. Similarly, PSNR values ranged from 32.34 to 32.64 dB for EDSR trained on mixed datasets, and SSIM values ranged from 0.9941 to 0.9945. Significant differences in PSNR and SSIM were observed between models trained on pure and mixed datasets. Pearson's correlation coefficient indicated a strong positive correlation between dataset size and image quality metrics. Using diverse image data obtained from the same subject can improve the performance of deep-learning models in medical image super-resolution tasks.
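
The objective evaluation described above relies on PSNR and SSIM computed between ground-truth and super-resolved images. The following Python sketch illustrates how such metrics can be computed with scikit-image; it is not the authors' evaluation code, and the array shapes, random test data, and normalized [0, 1] data range are assumptions for illustration only.

# Minimal sketch of PSNR/SSIM evaluation between a ground-truth slice and a
# super-resolved output (e.g., from a U-Net or EDSR model).
# Test data and the [0, 1] data range are illustrative assumptions.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_slice(reference: np.ndarray, reconstructed: np.ndarray):
    """Return (PSNR in dB, SSIM) for two 2-D slices scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
    ssim = structural_similarity(reference, reconstructed, data_range=1.0)
    return psnr, ssim

if __name__ == "__main__":
    # Hypothetical arrays standing in for a ground-truth T1WI slice and a
    # network output; real use would load reconstructed MRI slices instead.
    rng = np.random.default_rng(0)
    gt = rng.random((256, 256))
    sr = np.clip(gt + rng.normal(scale=0.01, size=gt.shape), 0.0, 1.0)
    psnr, ssim = evaluate_slice(gt, sr)
    print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")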