BPGAN: Brain PET synthesis from MRI using generative adversarial network for multi-modal Alzheimer’s disease diagnosis

Bibliographic Details
Published in: Computer Methods and Programs in Biomedicine, Vol. 217, p. 106676
Main Authors: Zhang, Jin; He, Xiaohai; Qing, Linbo; Gao, Feng; Wang, Bin
Format: Journal Article
Language: English
Published: Ireland: Elsevier B.V., 01.04.2022
Summary:
•A novel model, the brain PET generative adversarial network (BPGAN), is proposed. It synthesizes realistic and diverse brain PET scans from the corresponding brain MRI scans.
•A hybrid loss is introduced to supervise the training of brain PET scan synthesis on multiple levels.
•Two alternative data splitting strategies are explored to study their impact on the MRI-to-PET synthesis task and their applicability in different medical scenarios.
•The proposed model shows superior performance to state-of-the-art methods. The experimental results indicate that BPGAN can serve as an effective data completion method for multi-modal AD diagnosis.

Multi-modal medical images, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), are widely used to diagnose brain disorders such as Alzheimer's disease (AD) because they provide complementary information. PET scans can detect cellular changes in organs and tissues earlier than MRI, but unlike MRI, PET data is difficult to acquire due to cost, radiation exposure, and other limitations, and it is missing for many subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. To address this problem, a 3D end-to-end generative adversarial network (BPGAN) is proposed to synthesize brain PET from MRI scans, which can serve as a potential data completion scheme for multi-modal medical image research. BPGAN learns an end-to-end mapping function that transforms input MRI scans into their underlying PET scans. First, a 3D multiple convolution U-Net (MCU) generator architecture is designed to improve the visual quality of the synthetic results while preserving the diverse brain structures of different subjects. Second, a 3D gradient profile (GP) loss and a structural similarity index measure (SSIM) loss are employed so that the synthetic PET scans have higher similarity to the ground truth. Alternative data partitioning strategies are also explored to study their impact on the method's performance in different medical scenarios. In experiments on the publicly available ADNI database, BPGAN outperforms the compared models in mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and SSIM, and qualitative evaluations also validate the effectiveness of the approach. Additionally, combining MRI with the synthetic PET scans yields multi-class AD diagnosis accuracies of 85.00% on dataset-A and 56.47% on dataset-B, each an improvement of about one percentage point over stand-alone MRI. The quantitative measures, qualitative displays, and classification evaluation together demonstrate that the PET images synthesized by BPGAN are reasonable and of high quality, providing complementary information that improves AD diagnosis performance. This work provides a valuable reference for multi-modal medical image analysis.
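The abstract names a hybrid loss built around a 3D gradient profile (GP) term and an SSIM term, but the exact formulation is not given in this record. The following is a minimal PyTorch sketch of how such a hybrid loss could be assembled, assuming a finite-difference gradient term, a global (non-windowed) SSIM, and illustrative weights; the authors' actual definitions and coefficients may differ, and the adversarial term supplied by the discriminator is omitted here.

import torch
import torch.nn.functional as F

def gradient_3d(vol):
    # Finite-difference gradients of a 5D tensor (N, C, D, H, W) along z, y, x.
    dz = vol[:, :, 1:, :, :] - vol[:, :, :-1, :, :]
    dy = vol[:, :, :, 1:, :] - vol[:, :, :, :-1, :]
    dx = vol[:, :, :, :, 1:] - vol[:, :, :, :, :-1]
    return dz, dy, dx

def gp_loss(fake, real):
    # L1 distance between the 3D gradients of synthetic and ground-truth PET;
    # a stand-in for the paper's gradient profile loss.
    return sum(F.l1_loss(f, r) for f, r in zip(gradient_3d(fake), gradient_3d(real)))

def ssim_3d(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Global SSIM over the whole volume; a windowed version is common in practice.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def hybrid_loss(fake_pet, real_pet, w_l1=1.0, w_gp=0.5, w_ssim=0.5):
    # Hypothetical weighting; the paper's actual coefficients are not given here.
    return (w_l1 * F.l1_loss(fake_pet, real_pet)
            + w_gp * gp_loss(fake_pet, real_pet)
            + w_ssim * (1.0 - ssim_3d(fake_pet, real_pet)))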
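Likewise, the reported quantitative metrics (MAE, PSNR, SSIM) can be computed for a pair of 3D volumes with standard tooling. A small NumPy/scikit-image sketch follows, assuming volumes normalized to [0, 1]; the paper's own normalization and evaluation protocol are not specified in this record.

import numpy as np
from skimage.metrics import structural_similarity

def mae(fake, real):
    # Mean absolute error over all voxels.
    return float(np.mean(np.abs(fake - real)))

def psnr(fake, real, data_range=1.0):
    # Peak signal-to-noise ratio in dB for volumes scaled to [0, data_range].
    mse = np.mean((fake - real) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

# Usage with two (D, H, W) float arrays in [0, 1]:
# ssim_score = structural_similarity(real, fake, data_range=1.0)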
ISSN: 0169-2607
EISSN: 1872-7565
DOI: 10.1016/j.cmpb.2022.106676