Multi-modal data Alzheimer’s disease detection based on 3D convolution


Bibliographic Details
Published in: Biomedical Signal Processing and Control, Vol. 75, p. 103565
Main Authors: Kong, Zhaokai; Zhang, Mengyi; Zhu, Wenjun; Yi, Yang; Wang, Tian; Zhang, Baochang
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.05.2022

Summary: Multi-modal medical imaging information is widely used in computer-assisted investigation and diagnosis. A typical example is that combining information from multi-modal medical images enables a more accurate and comprehensive classification and diagnosis of the same Alzheimer's disease (AD) subject. This paper proposes an image fusion method that fuses Magnetic Resonance Imaging (MRI) with Positron Emission Tomography (PET) images from AD patients. In addition, 3D convolutional neural networks are used to evaluate the effectiveness of the image fusion approach on both binary and multi-class classification tasks. Applying 3D convolution to the fused images extracts richer multi-modal feature information, and the extracted multi-modal features are then classified by a fully connected neural network. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) public dataset show that the proposed model achieves better accuracy, sensitivity and specificity.
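
The summary above only outlines the pipeline (fuse co-registered MRI and PET volumes, extract features with 3D convolutions, classify with a fully connected head). Below is a minimal PyTorch sketch of that kind of pipeline, not the authors' implementation: the voxel-wise weighted-average fusion in fuse_volumes, the layer sizes, and the Simple3DCNN name are illustrative assumptions, since this record does not specify the paper's fusion rule or network configuration.

# Minimal sketch (assumptions noted above): fuse co-registered MRI and PET
# volumes, then classify the fused volume with a small 3D CNN.
import torch
import torch.nn as nn


def fuse_volumes(mri: torch.Tensor, pet: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Voxel-wise weighted fusion of two co-registered volumes of the same shape.
    The paper's actual fusion method is not described in this record."""
    return alpha * mri + (1.0 - alpha) * pet


class Simple3DCNN(nn.Module):
    """Small 3D CNN: stacked conv blocks followed by a fully connected classifier."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # global pooling keeps the classifier input size fixed
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 32),
            nn.ReLU(inplace=True),
            nn.Linear(32, num_classes),  # e.g. AD / MCI / CN for the 3-way task
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    # Toy example: a batch of two fused 64x64x64 volumes with random values.
    mri = torch.randn(2, 1, 64, 64, 64)
    pet = torch.randn(2, 1, 64, 64, 64)
    fused = fuse_volumes(mri, pet)
    logits = Simple3DCNN(num_classes=3)(fused)
    print(logits.shape)  # torch.Size([2, 3])

For a binary (dichotomous) task such as AD vs. cognitively normal, num_classes would be set to 2; everything else in the sketch stays the same.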
ISSN: 1746-8094, 1746-8108
DOI: 10.1016/j.bspc.2022.103565