Multi‐mask self‐supervised learning for physics‐guided neural networks in highly accelerated magnetic resonance imaging

Bibliographic Details
Published in: NMR in Biomedicine, Vol. 35, No. 12, p. e4798
Main Authors: Yaman, Burhaneddin; Gu, Hongyi; Hosseini, Seyed Amir Hossein; Demirel, Omer Burak; Moeller, Steen; Ellermann, Jutta; Uğurbil, Kâmil; Akçakaya, Mehmet
Format: Journal Article
Language: English
Published: England: Wiley Subscription Services, Inc (John Wiley and Sons Inc), 01.12.2022

Summary: Self‐supervised learning has shown great promise because of its ability to train deep learning (DL) magnetic resonance imaging (MRI) reconstruction methods without fully sampled data. Current self‐supervised learning methods for physics‐guided reconstruction networks split acquired undersampled data into two disjoint sets, where one is used for data consistency (DC) in the unrolled network, while the other is used to define the training loss. In this study, we propose an improved self‐supervised learning strategy that more efficiently uses the acquired data to train a physics‐guided reconstruction network without a database of fully sampled data. The proposed multi‐mask self‐supervised learning via data undersampling (SSDU) applies a holdout masking operation on the acquired measurements to split them into multiple pairs of disjoint sets for each training sample, using one set of each pair for the DC units and the other to define the loss, thereby using the undersampled data more efficiently. Multi‐mask SSDU is applied to fully sampled 3D knee and prospectively undersampled 3D brain MRI datasets, for various acceleration rates and patterns, and compared with the parallel imaging method CG‐SENSE and single‐mask SSDU DL‐MRI, as well as with supervised DL‐MRI when fully sampled data are available. The results on knee MRI show that the proposed multi‐mask SSDU outperforms single‐mask SSDU and performs as well as supervised DL‐MRI. A clinical reader study further ranks multi‐mask SSDU higher than supervised DL‐MRI in terms of signal‐to‐noise ratio and aliasing artifacts. Results on brain MRI show that multi‐mask SSDU achieves better reconstruction quality than single‐mask SSDU. The reader study demonstrates that multi‐mask SSDU at R = 8 significantly improves reconstruction compared with single‐mask SSDU at R = 8, as well as with CG‐SENSE at R = 2.
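The holdout splitting at the core of multi‐mask SSDU can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' released implementation: it assumes the acquired undersampling pattern is given as a binary mask, and the holdout ratio, the uniform‐random selection rule, and the helper names (`multi_mask_split`, `unroll`, `loss_fn`) are assumptions made for illustration only.

```python
# Minimal sketch of multi-mask SSDU-style data splitting (assumptions noted above).
# For each training sample, the acquired k-space locations `omega` are split into
# K disjoint pairs (theta_k, lambda_k): theta_k feeds the data-consistency (DC)
# units of the unrolled network, lambda_k defines the training loss.
import numpy as np


def multi_mask_split(omega, num_masks=4, rho=0.4, rng=None):
    """Split the acquired-sample mask `omega` into `num_masks` disjoint
    (theta_k, lambda_k) pairs via random holdout masking."""
    rng = np.random.default_rng() if rng is None else rng
    acquired = np.flatnonzero(omega)               # indices of acquired k-space samples
    pairs = []
    for _ in range(num_masks):
        holdout = rng.random(acquired.size) < rho  # samples held out for the loss
        theta = np.zeros_like(omega)
        lam = np.zeros_like(omega)
        theta.flat[acquired[~holdout]] = 1         # Theta_k: used in the DC units
        lam.flat[acquired[holdout]] = 1            # Lambda_k: used to define the loss
        pairs.append((theta, lam))
    return pairs


# Usage sketch: each pair contributes one loss term, averaged over the K masks.
# `unroll` and `loss_fn` are hypothetical stand-ins for the physics-guided
# unrolled network and the training loss.
#
# pairs = multi_mask_split(omega)
# for theta, lam in pairs:
#     recon_kspace = unroll(kspace * theta, theta)                  # DC uses Theta_k only
#     total_loss += loss_fn(kspace * lam, recon_kspace * lam) / len(pairs)
```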
Bibliography: Funding information: National Institutes of Health, Grant/Award Numbers: R01HL153146, P41EB027061, U01EB025144; National Science Foundation, Grant/Award Number: CAREER CCF‐1651825
ISSN: 0952-3480
EISSN: 1099-1492
DOI: 10.1002/nbm.4798