Multimodal learning using convolution neural network and Sparse Autoencoder
| Published in | International Conference on Big Data and Smart Computing, pp. 309 - 312 |
|---|---|
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.02.2017 |
Summary: In the last decade, pattern recognition methods using neuroimaging data for the diagnosis of Alzheimer's disease (AD) have been the subject of extensive research. Deep learning has recently attracted great interest for AD classification. Most previous work has used a single-modality dataset, such as Magnetic Resonance Imaging (MRI) or Positron Emission Tomography (PET), and has shown high performance. However, distinguishing Alzheimer's brain data from healthy brain data in older adults (age > 75) is challenging because the brain patterns and image intensities are highly similar. Combining multiple modalities can address this issue, since complementary hidden biomarkers can be discovered across modalities that any single modality alone cannot provide. We therefore propose a deep learning method on fused multimodal data. In detail, our approach trains and tests a Sparse Autoencoder (SAE) and a convolutional neural network (CNN) on combined PET-MRI data to diagnose a patient's disease status. We focus on the advantage of multiple modalities in providing complementary information over a single modality, which leads to improved classification accuracy. In experiments on a dataset of 1,272 scans from the ADNI study, the proposed method achieves a classification accuracy of 90% between AD patients and healthy controls, demonstrating an improvement over using a single modality.
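The abstract gives no implementation details, so the following is only a minimal sketch of the kind of SAE-plus-CNN pipeline it describes. It assumes PyTorch, 32x32 co-registered MRI/PET patches, an L1 sparsity penalty, and layer sizes chosen purely for illustration; the abstract does not say how the SAE features feed the CNN, so the two components are simply shown side by side.

```python
# Hypothetical sketch of an SAE + CNN pipeline on fused MRI/PET data.
# Patch size, layer widths, and the sparsity weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseAutoencoder(nn.Module):
    """Learns a shared code from concatenated MRI and PET patches."""

    def __init__(self, in_dim=2 * 32 * 32, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        code = torch.sigmoid(self.encoder(x))
        recon = self.decoder(code)
        return code, recon

    @staticmethod
    def loss(x, recon, code, sparsity_weight=1e-3):
        # Reconstruction error plus an L1 penalty that keeps the hidden
        # code sparse (used here as a simple stand-in for a KL penalty).
        return F.mse_loss(recon, x) + sparsity_weight * code.abs().mean()


class FusionCNN(nn.Module):
    """Classifies AD vs. healthy control from 2-channel (MRI, PET) patches."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))


if __name__ == "__main__":
    # Toy forward pass on random tensors standing in for co-registered
    # 32x32 MRI and PET patches of the same brain region.
    mri = torch.rand(4, 1, 32, 32)
    pet = torch.rand(4, 1, 32, 32)

    sae = SparseAutoencoder()
    fused = torch.cat([mri, pet], dim=1)            # fuse the two modalities
    code, recon = sae(fused.flatten(1))
    print("SAE loss:", SparseAutoencoder.loss(fused.flatten(1), recon, code).item())

    cnn = FusionCNN()
    logits = cnn(fused)                             # 2-channel PET-MRI input
    print("CNN logits shape:", tuple(logits.shape))
```

In the actual method the SAE-learned features would presumably pre-train or feed the classifier rather than being computed independently as above; this sketch only illustrates the two building blocks and the modality fusion step named in the abstract.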
ISSN: 2375-9356
DOI: 10.1109/BIGCOMP.2017.7881683