An evolutionary explainable deep learning approach for Alzheimer's MRI classification
| Published in | Expert Systems with Applications, Vol. 220, p. 119709 |
|---|---|
| Main Authors | , , |
| Format | Journal Article |
| Language | English |
| Published | Elsevier Ltd, 15.06.2023 |
| Subjects | |
Summary: Deep neural networks (DNNs) are prominent machine learning (ML) algorithms, widely used in medical tasks. Among them, convolutional neural networks (CNNs) are well known for image-based tasks and have shown excellent performance. Despite this remarkable performance, one of their most fundamental drawbacks is their inability to explain the cause of their outputs; an ML algorithm should present an explanation of its output to users to increase its reliability. Occlusion Map is one method used for this purpose: it finds the regions of an image that have a significant impact on the network's output by iteratively occluding different regions of the image and measuring the effect. In this study, we used Magnetic Resonance Imaging (MRI) scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and trained a 3D-CNN model to distinguish Alzheimer's Disease (AD) patients from cognitively normal (CN) subjects. We combined a genetic algorithm-based Occlusion Map method with a set of backpropagation-based explainability methods and ultimately derived a brain mask for AD patients. By comparing the extracted brain regions with prior studies in this field, we found that the extracted regions are significantly effective in diagnosing AD from the perspective of Alzheimer's specialists. Our model achieved an accuracy of 87% in 5-fold cross-validation, which is acceptable compared to similar studies. We then used a 3D-CNN model with 96% validation accuracy (on unmasked data covering all 96 distinct brain regions of the Harvard-Oxford brain atlas) in the genetic algorithm phase to produce a suitable brain mask. Finally, using the lrp_z_plus_fast explainability method, we achieved 93% validation accuracy with only 29 brain regions.
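The occlusion-map idea the summary describes can be sketched in a few lines: slide a masking patch over an image and record how much the model's score drops when each region is hidden. Everything below is an illustrative stand-in; the toy `model` scorer, patch size, and 2D input are assumptions, not the paper's 3D-CNN or MRI pipeline.

```python
import numpy as np

def model(image):
    # Placeholder scorer: responds only to the bright top-left corner.
    return image[:8, :8].sum()

def occlusion_map(image, patch=8, stride=8, fill=0.0):
    """Score drop per occluded patch; larger drop = more important region."""
    base = model(image)
    h, w = image.shape
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill   # hide this region
            heat[i, j] = base - model(occluded)         # importance = drop
    return heat

img = np.zeros((32, 32))
img[:8, :8] = 1.0            # the only informative region in this toy image
heat = occlusion_map(img)    # 4x4 heatmap; top-left cell dominates
```

The same loop generalizes directly to 3D volumes by occluding cubes instead of square patches, which is the costly iteration that the paper's genetic-algorithm variant is designed to shortcut.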
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2023.119709
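The genetic-algorithm mask search mentioned in the summary (evolving a binary mask over the 96 atlas regions) can be sketched roughly as follows. This is a minimal sketch under stated assumptions: the synthetic `region_score` values, population size, mutation rate, and fitness trade-off are all illustrative stand-ins, not the paper's model, data, or hyperparameters.

```python
import random

random.seed(0)
N_REGIONS = 96                                   # Harvard-Oxford atlas size
region_score = [random.random() for _ in range(N_REGIONS)]  # stand-in scores

def fitness(mask):
    """Reward masks whose kept regions score well while using few regions."""
    kept = [s for s, m in zip(region_score, mask) if m]
    if not kept:
        return 0.0
    return sum(kept) / len(kept) - 0.001 * len(kept)

def mutate(mask, rate=0.02):
    # Flip each region bit with a small probability.
    return [1 - b if random.random() < rate else b for b in mask]

def crossover(a, b):
    # Single-point crossover between two parent masks.
    cut = random.randrange(1, N_REGIONS)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(N_REGIONS)] for _ in range(30)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                             # elitism: keep best masks
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(20)]

best = max(pop, key=fitness)                     # final binary brain mask
```

In the paper's setting, the fitness of a candidate mask would instead be the validation accuracy of the 3D-CNN evaluated on MRI volumes with the masked-out regions occluded.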