A Context-Supported Deep Learning Framework for Multimodal Brain Imaging Classification

Bibliographic Details
Published in: IEEE Transactions on Human-Machine Systems, Vol. 49, No. 6, pp. 611–622
Main Authors: Jiang, Jianmin; Fares, Ahmed; Zhong, Sheng-Hua
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2019

Summary: Over the past decade, "content-based" multimedia systems have achieved considerable success. By comparison, brain imaging and classification systems still require substantial improvement in accuracy, generalization, and interpretability, and the relationship between electroencephalogram (EEG) signals and the corresponding multimedia content needs further exploration. In this paper, we integrate implicit and explicit learning modalities into a context-supported deep learning framework and propose an improved solution for the task of brain imaging classification via EEG signals. Within the proposed framework, we introduce a consistency test that exploits the context of brain images and establishes a mapping between visual-level features and cognitive-level features inferred from EEG signals. In this way, a multimodal approach can be developed that improves brain imaging classification by drawing on explicit learning modalities and research from the image processing community. In addition, a number of fusion techniques are investigated to combine and optimize the individual classification results. Extensive experiments demonstrate the effectiveness of the proposed framework: in comparison with existing state-of-the-art approaches, it achieves superior performance not only on standard visual object classification criteria but also in the exploitation of transfer learning. For the convenience of research dissemination, the source code is publicly available on GitHub (https://github.com/aneeg/dual-modal-learning).
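The summary mentions fusing the individual classification results of the two modalities but does not detail the fusion rules. A minimal sketch, assuming a simple weighted-average (late) fusion of per-class probability vectors from the visual and EEG classifiers; the function name, weight value, and example scores below are illustrative, not taken from the paper:

    import numpy as np

    def late_fusion(p_visual: np.ndarray, p_eeg: np.ndarray, w: float = 0.5) -> np.ndarray:
        # Weighted-average (late) fusion of two classifiers' class-probability
        # vectors: `w` weights the visual stream, (1 - w) the EEG stream.
        # This is a generic illustration, not the paper's specific fusion rule.
        assert p_visual.shape == p_eeg.shape
        fused = w * p_visual + (1.0 - w) * p_eeg
        return fused / fused.sum(axis=-1, keepdims=True)  # renormalize

    # Example: three-class softmax outputs from the two modalities.
    p_vis = np.array([0.70, 0.20, 0.10])   # visual-level classifier
    p_eeg = np.array([0.40, 0.50, 0.10])   # EEG (cognitive-level) classifier
    print(late_fusion(p_vis, p_eeg, w=0.6).argmax())  # fused class prediction

Other common choices for combining per-modality scores include product-rule fusion and learning the combination weights on a validation set; the paper compares several such techniques.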
ISSN: 2168-2291
EISSN: 2168-2305
DOI: 10.1109/THMS.2019.2904615