mustGAN: multi-stream Generative Adversarial Networks for MR Image Synthesis

Bibliographic Details
Published in: Medical Image Analysis, Vol. 70, p. 101944
Main Authors: Yurt, Mahmut; Dar, Salman UH; Erdem, Aykut; Erdem, Erkut; Oguz, Kader K; Çukur, Tolga
Format: Journal Article
Language: English
Published: Netherlands: Elsevier B.V., 01.05.2021

Summary:
• A novel multi-stream GAN architecture for multi-contrast MRI synthesis.
• Insights into the learned latent representations in one-to-one and many-to-one source-to-target mappings.
• Adaptive fusion of unique features in multiple one-to-one streams and shared features in a many-to-one stream.
• State-of-the-art synthesis performance in multiple tasks on brain images from healthy subjects and glioma patients.

Multi-contrast MRI protocols increase the level of morphological information available for diagnosis. Yet, the number and quality of contrasts are limited in practice by various factors, including scan time and patient motion. Synthesis of missing or corrupted contrasts from other high-quality ones can alleviate this limitation. When a single target contrast is of interest, common approaches for multi-contrast MRI involve either one-to-one or many-to-one synthesis methods, depending on their input. One-to-one methods take as input a single source contrast, and they learn a latent representation sensitive to unique features of the source. Meanwhile, many-to-one methods receive multiple distinct sources, and they learn a shared latent representation more sensitive to common features across sources. For enhanced image synthesis, we propose a multi-stream approach that aggregates information across multiple source images via a mixture of multiple one-to-one streams and a joint many-to-one stream. The complementary feature maps generated in the one-to-one streams and the shared feature maps generated in the many-to-one stream are combined with a fusion block. The location of the fusion block is adaptively modified to maximize task-specific performance. Quantitative and radiological assessments on T1-, T2-, PD-weighted, and FLAIR images clearly demonstrate the superior performance of the proposed method compared to previous state-of-the-art one-to-one and many-to-one methods.
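The stream structure described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the "encoders" here are toy linear maps standing in for convolutional streams, the fusion block is a simple concatenation followed by a projection, and all names (`encode`, `w_one`, `w_many`, `w_fuse`) are hypothetical. It only shows how unique feature maps from one-to-one streams and shared feature maps from a many-to-one stream could be fused into a single target prediction.

```python
import numpy as np

def encode(x, w):
    """Toy 'encoder': a linear map plus nonlinearity, standing in for a conv stream."""
    return np.tanh(x @ w)

rng = np.random.default_rng(0)
n_pix, feat = 64, 16                                    # flattened image size, feature width
sources = [rng.standard_normal((1, n_pix)) for _ in range(3)]  # e.g. T1, T2, PD inputs

# One-to-one streams: a separate encoder per source, capturing source-unique features
w_one = [rng.standard_normal((n_pix, feat)) for _ in sources]
unique_feats = [encode(s, w) for s, w in zip(sources, w_one)]

# Many-to-one stream: one encoder over the concatenated sources, capturing shared features
w_many = rng.standard_normal((n_pix * len(sources), feat))
shared_feat = encode(np.concatenate(sources, axis=1), w_many)

# Fusion block: combine unique and shared feature maps, then project to the target
fused = np.concatenate(unique_feats + [shared_feat], axis=1)   # (1, 4 * feat)
w_fuse = rng.standard_normal((fused.shape[1], n_pix))
target = np.tanh(fused @ w_fuse)                               # synthesized target image
```

In the paper the fusion point is additionally moved along the network depth and selected per task; in this sketch the fusion location is fixed for simplicity.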
ISSN: 1361-8415, 1361-8423
DOI: 10.1016/j.media.2020.101944