Automatic computation of regions of interest by robust principal component analysis. Application to automatic dementia diagnosis

Bibliographic Details
Published in: Knowledge-Based Systems, Vol. 123, pp. 229-237
Main Authors: Lozano, Francisco; Ortiz, Andrés; Munilla, Jorge; Peinado, Alberto; for the Alzheimer’s Disease Neuroimaging Initiative
Format: Journal Article
Language: English
Published: Amsterdam, Elsevier B.V. (Elsevier Science Ltd), 01.05.2017
ISSN: 0950-7051, 1872-7409
DOI: 10.1016/j.knosys.2017.02.025

Summary: Computer aided diagnosis systems based on brain imaging are a powerful tool to assist in the diagnosis of Alzheimer’s Disease (AD). The goal is the automatic recognition of the neurodegenerative patterns that characterize the disease. In this regard, determining the regions related to the disease is crucial for selecting the most discriminative voxels and optimizing the number of features used by the learning algorithm. In this paper, we propose a method based on the robust principal component analysis (Robust PCA) algorithm that automatically computes Regions Of Interest (ROIs) over a training set of images and ranks them according to their diagnostic relevance. Robust PCA is used to compute the sparse error matrix, which is, in turn, employed to determine the brain areas related to Alzheimer’s disease. These areas are further used as a mask to select and weight the most discriminative voxels to construct a classification model. We then describe a method to fuse the features computed from different image modalities based on the weights assigned by the individual Support Vector Classifiers during the training process. The method presented here has been applied to multimodal images containing both functional (18F-FDG PET) and structural (Magnetic Resonance) data. Experiments conducted using 68 control subjects and 70 AD patients show the effectiveness of the proposed approach for exploratory analysis. At the same time, classification experiments using the features computed by the proposed method, assessed by cross-validation, showed accuracy values of up to 92% and an AUC (Area Under the Curve) of 0.95. Thus, the proposal appears to be an effective technique to reveal ROIs in differential diagnosis applications and to combine multimodal image data, outperforming other classification methods, including the voxel-as-features (VAF) baseline.
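
The pipeline described in the summary lends itself to a short illustration. The following minimal Python sketch (not the authors' code) shows how Robust PCA, solved here by the standard inexact augmented Lagrangian / principal component pursuit scheme, splits a matrix of flattened training images into a low-rank part L and a sparse error part S, and how the sparse errors can then be aggregated into a voxel mask of candidate ROIs. All variable names, the lambda and mu heuristics, and the 95th-percentile cut-off are illustrative assumptions, not values taken from the paper.

    import numpy as np

    def robust_pca(X, lam=None, mu=None, tol=1e-7, max_iter=500):
        # Principal component pursuit via inexact ALM:
        # minimise ||L||_* + lam * ||S||_1  subject to  X = L + S
        m, n = X.shape
        if lam is None:
            lam = 1.0 / np.sqrt(max(m, n))        # standard PCP choice
        if mu is None:
            mu = 0.25 * m * n / np.abs(X).sum()   # common heuristic
        norm_X = np.linalg.norm(X, 'fro')
        L = np.zeros_like(X)
        S = np.zeros_like(X)
        Y = np.zeros_like(X)                      # Lagrange multipliers

        def shrink(M, tau):                       # element-wise soft-thresholding
            return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

        for _ in range(max_iter):
            # singular-value thresholding -> low-rank component
            U, sig, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
            L = (U * shrink(sig, 1.0 / mu)) @ Vt
            # soft-thresholding -> sparse error component
            S = shrink(X - L + Y / mu, lam / mu)
            residual = X - L - S
            Y += mu * residual
            if np.linalg.norm(residual, 'fro') / norm_X < tol:
                break
        return L, S

    # One flattened, spatially normalised image per row (subjects x voxels);
    # random data stands in for the real PET / MRI volumes.
    X = np.random.rand(138, 5000)
    _, S = robust_pca(X)

    # Voxels with a consistently large sparse error across subjects are kept
    # as the ROI mask used to select (and later weight) classifier features.
    voxel_score = np.abs(S).mean(axis=0)
    roi_mask = voxel_score > np.percentile(voxel_score, 95)  # illustrative cut-off

In the paper, the voxels selected this way are further weighted and fed to Support Vector Classifiers trained per modality (18F-FDG PET and MRI), whose weights then drive the multimodal fusion; that fusion step is only summarized in the abstract and is not reproduced in the sketch above.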