Classification and Graphical Analysis of Alzheimer’s Disease and Its Prodromal Stage Using Multimodal Features From Structural, Diffusion, and Functional Neuroimaging Data and the APOE Genotype
| Published in | Frontiers in Aging Neuroscience, Vol. 12, p. 238 |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | Lausanne: Frontiers Media S.A. / Frontiers Research Foundation, 30 July 2020 |
Summary: Graphical, voxel-based, and region-based analyses have become popular approaches to studying neurodegenerative disorders such as Alzheimer’s disease (AD) and its prodromal stage. These methods have previously been used to classify or discriminate AD and its prodromal subtypes: stable MCI (MCIs), which does not convert to AD but remains stable over time, and converting MCI (MCIc), which does convert to AD. However, the results reported across similar studies are often inconsistent, and the classification accuracy for MCIs vs. MCIc is limited. In this study, we propose combining different neuroimaging modalities with the APOE genotype to form a multimodal system for the discrimination of AD and to increase classification accuracy. Initially, we used two well-known analyses to extract features from each neuroimage for the discrimination of AD: whole-brain parcellation (region-based) analysis and voxel-wise analysis. We also investigated graphical analysis for all six binary classification groups. Data for a total of 129 subjects (33 AD, 30 MCIs, 31 MCIc, and 35 HCs) for each imaging modality were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. To integrate the different modalities and their complementary information into one form, and to optimize the classifier, we used the multiple kernel learning (MKL) framework. The obtained results indicate that our multimodal approach yields a significant improvement in accuracy over any single modality alone. The areas under the curve obtained by the proposed method were 97.78%, 96.94%, 95.56%, 96.25%, 96.67%, and 96.59% for AD vs. HC, MCIs vs. MCIc, AD vs. MCIc, AD vs. MCIs, HC vs. MCIc, and HC vs. MCIs binary classification, respectively. The proposed multimodal method improved the classification result for the MCIs vs. MCIc group compared with the unimodal classification results. We found that the (left/right) precentral region was selected in all six binary classification groups and can therefore be considered the most significant region. Furthermore, using nodal network topology, we found that FDG-PET, AV45-PET, and rs-fMRI were the most important neuroimages, showing many affected regions relative to the other modalities. We also compared our results with recently published results.
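The summary describes fusing per-modality features through multiple kernel learning before classification. The sketch below (Python, scikit-learn) is not the authors' implementation; it only illustrates the general idea of combining per-modality kernels into one precomputed kernel for an SVM, using synthetic data. The modality names, feature dimensions, fixed equal kernel weights, and the train/test split are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of kernel fusion across modalities (assumed setup, not the paper's pipeline).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_subjects = 68                      # e.g., 33 AD + 35 HC, matching the cohort sizes above
y = np.array([1] * 33 + [0] * 35)    # 1 = AD, 0 = HC

# Hypothetical per-modality feature matrices; in a real pipeline these would be
# region-based or voxel-wise measures from each neuroimage plus the APOE genotype.
modalities = {
    "sMRI": rng.normal(size=(n_subjects, 90)),
    "FDG-PET": rng.normal(size=(n_subjects, 90)),
    "rs-fMRI": rng.normal(size=(n_subjects, 90)),
    "APOE": rng.integers(0, 3, size=(n_subjects, 1)).astype(float),
}

idx_train, idx_test = train_test_split(
    np.arange(n_subjects), test_size=0.3, stratify=y, random_state=0
)

def linear_kernel(A, B):
    """Gram matrix of dot products between the rows of A and B."""
    return A @ B.T

# Fixed, equal kernel weights stand in for the weights an MKL solver would learn.
weights = {name: 1.0 / len(modalities) for name in modalities}

# Convex combination of per-modality kernels -> one precomputed kernel.
K_train = sum(weights[name] * linear_kernel(X[idx_train], X[idx_train])
              for name, X in modalities.items())
K_test = sum(weights[name] * linear_kernel(X[idx_test], X[idx_train])
             for name, X in modalities.items())

clf = SVC(kernel="precomputed", probability=True).fit(K_train, y[idx_train])
scores = clf.predict_proba(K_test)[:, 1]
print("AUC on held-out subjects:", roc_auc_score(y[idx_test], scores))
```

In a full MKL framework the kernel weights are learned jointly with the classifier rather than fixed in advance as they are in this sketch.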
Bibliography: For more information about the Alzheimer’s Disease Neuroimaging Initiative, please see the Acknowledgments section. Edited by: Woon-Man Kung, Chinese Culture University, Taiwan. Reviewed by: Henning U. Voss, Cornell University, United States; Gabriel Gonzalez-Escamilla, Johannes Gutenberg University Mainz, Germany; Kuo-Kun Tseng, Harbin Institute of Technology, Shenzhen, China.
ISSN: 1663-4365
DOI: 10.3389/fnagi.2020.00238