Deep-learning-based generation of synthetic 6-minute MRI from 2-minute MRI for use in head and neck cancer radiotherapy


Bibliographic Details
Published in: Frontiers in Oncology, Vol. 12, p. 975902
Main Authors: Wahid, Kareem A.; Xu, Jiaofeng; El-Habashy, Dina; Khamis, Yomna; Abobakr, Moamen; McDonald, Brigid; O'Connell, Nicolette; Thill, Daniel; Ahmed, Sara; Sharafi, Christina Setareh; Preston, Kathryn; Salzillo, Travis C.; Mohamed, Abdallah S. R.; He, Renjie; Cho, Nathan; Christodouleas, John; Fuller, Clifton D.; Naser, Mohamed A.
Format: Journal Article
Language: English
Published: Frontiers Media S.A., Switzerland, 08.11.2022

Summary: Quick magnetic resonance imaging (MRI) scans with low contrast-to-noise ratio are typically acquired for daily MRI-guided radiotherapy setup. However, for patients with head and neck (HN) cancer, these images are often insufficient for discriminating target volumes and organs at risk (OARs). In this study, we investigated a deep learning (DL) approach to generate high-quality synthetic images from low-quality images. We used 108 unique HN image sets of paired 2-minute T2-weighted scans (2mMRI) and 6-minute T2-weighted scans (6mMRI). Ninety image sets (~20,000 slices) were used to train a 2-dimensional generative adversarial DL model that used 2mMRI as input and 6mMRI as output. Eighteen image sets were used to test model performance. Similarity metrics, including the mean squared error (MSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR), were calculated between normalized synthetic 6mMRI and ground-truth 6mMRI for all test cases. In addition, a previously trained OAR DL auto-segmentation model was used to segment the right parotid gland, left parotid gland, and mandible on all test case images. Dice similarity coefficients (DSC) were calculated between 2mMRI and either ground-truth 6mMRI or synthetic 6mMRI for each OAR; two one-sided t-tests were applied between the ground-truth and synthetic 6mMRI DSCs to determine equivalence. Finally, a visual Turing test using paired ground-truth and synthetic 6mMRI was performed with three clinician observers; the percentage of images that were correctly identified was compared to random chance using proportion equivalence tests. The median similarity metrics computed over whole images were 0.19, 0.93, and 33.14 for MSE, SSIM, and PSNR, respectively. The median DSCs comparing ground-truth vs. synthetic 6mMRI auto-segmented OARs were 0.86 vs. 0.85, 0.84 vs. 0.84, and 0.82 vs. 0.85 for the right parotid gland, left parotid gland, and mandible, respectively (equivalence p < 0.05 for all OARs). The percentage of images correctly identified was equivalent to chance (p < 0.05 for all observers). Using 2mMRI inputs, we demonstrate that DL-generated synthetic 6mMRI outputs have high similarity to ground-truth 6mMRI, though further improvements can be made. Our study facilitates the clinical incorporation of synthetic MRI into MRI-guided radiotherapy.
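For reference, the sketch below illustrates how the evaluation steps named in the abstract (MSE, SSIM, and PSNR between normalized synthetic and ground-truth 6mMRI, Dice similarity coefficients between segmentation masks, and a paired two one-sided t-test for equivalence) could be computed in Python. This is not the authors' code: the min-max normalization scheme, the helper names, and the equivalence margin are assumptions made purely for illustration.

```python
# Hypothetical illustration, not the authors' released implementation.
import numpy as np
from scipy.stats import ttest_rel
from skimage.metrics import structural_similarity, peak_signal_noise_ratio


def normalize(img: np.ndarray) -> np.ndarray:
    """Min-max normalize an MRI volume to [0, 1] before comparison (assumed scheme)."""
    img = img.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)


def similarity_metrics(synthetic: np.ndarray, ground_truth: np.ndarray) -> dict:
    """MSE, SSIM, and PSNR between a synthetic and a ground-truth 6mMRI volume."""
    syn, gt = normalize(synthetic), normalize(ground_truth)
    mse = float(np.mean((syn - gt) ** 2))
    ssim = structural_similarity(gt, syn, data_range=1.0)
    psnr = peak_signal_noise_ratio(gt, syn, data_range=1.0)
    return {"MSE": mse, "SSIM": ssim, "PSNR": psnr}


def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary OAR segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + 1e-8)


def tost_paired(x, y, margin: float) -> float:
    """Two one-sided paired t-tests (TOST): a returned p-value below alpha
    indicates equivalence of the paired samples within +/- margin.
    The margin value is an illustrative choice, not taken from the paper."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # Lower bound: H0 mean(x - y) <= -margin  vs.  H1 mean(x - y) > -margin
    p_lower = ttest_rel(x + margin, y, alternative="greater").pvalue
    # Upper bound: H0 mean(x - y) >= +margin  vs.  H1 mean(x - y) < +margin
    p_upper = ttest_rel(x - margin, y, alternative="less").pvalue
    return max(p_lower, p_upper)
```

As a usage note, `similarity_metrics` would be applied per test case to obtain the distributions summarized by the reported medians, and `tost_paired` would be applied to the per-case DSC values of ground-truth vs. synthetic 6mMRI for each OAR.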
Reviewed by: Jing Yuan, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China; David Waddington, The University of Sydney, Australia
This article was submitted to Radiation Oncology, a section of the journal Frontiers in Oncology
Edited by: Chia-ho Hua, St. Jude Children’s Research Hospital, United States
ISSN: 2234-943X
DOI: 10.3389/fonc.2022.975902