A New Approach to Visual Classification Using Concatenated Deep Learning for Multimode Fusion of EEG and Image Data

Bibliographic Details
Published in: Advances in Visual Computing, Vol. 13598, pp. 225-236
Main Authors: Mishra, Alankrit; Bajwa, Garima
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2022
Series: Lecture Notes in Computer Science
ISBN: 9783031207129, 3031207122
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/978-3-031-20713-6_17

More Information
Summary: In this work, we explore various approaches for automated visual classification of multimodal inputs, such as EEG and image data for the same item, focusing on finding an optimal solution. Our new technique examines the fusion of EEG and image data using a concatenation of deep learning models for classification, where the EEG feature space is encoded as 8-bit grayscale images. This concatenation-based model achieves 95% accuracy on the 39-class EEG-ImageNet dataset, setting a new benchmark and surpassing all prior work. Furthermore, we show that it is computationally effective in multimodal classification when human subjects are presented with visual stimuli of objects in three-dimensional real-world space rather than with images of the same objects. These findings will improve machine visual perception and bring it closer to human-learned vision.
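
The record does not include the chapter's implementation details, so the following is only a minimal sketch of what a concatenation-based EEG/image fusion classifier of this kind could look like in PyTorch. The ResNet-18 image backbone, the small CNN over the 8-bit grayscale EEG encodings, the input resolutions, and all layer sizes are assumptions; only the concatenation fusion and the 39-class output come from the abstract above.

```python
# Hypothetical sketch of a concatenation-based fusion classifier.
# Backbones, layer sizes, and input shapes are illustrative assumptions,
# not the chapter's published architecture.
import torch
import torch.nn as nn
from torchvision import models


class ConcatFusionClassifier(nn.Module):
    """Fuses an image branch and an EEG branch by concatenating their
    feature vectors before a shared classification head."""

    def __init__(self, num_classes: int = 39, eeg_feat_dim: int = 128):
        super().__init__()
        # Image branch: an ImageNet-pretrained CNN with its classifier
        # removed (ResNet-18 is an assumption).
        resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.image_branch = nn.Sequential(*list(resnet.children())[:-1])
        img_feat_dim = resnet.fc.in_features  # 512 for ResNet-18

        # EEG branch: the EEG feature space is first encoded as 8-bit
        # grayscale images (per the abstract), then passed through a small
        # CNN; the layers below are illustrative.
        self.eeg_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, eeg_feat_dim), nn.ReLU(),
        )

        # Concatenate both feature vectors and classify over the 39 classes.
        self.classifier = nn.Sequential(
            nn.Linear(img_feat_dim + eeg_feat_dim, 256), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, image: torch.Tensor, eeg_image: torch.Tensor) -> torch.Tensor:
        img_feat = self.image_branch(image).flatten(1)   # (B, 512)
        eeg_feat = self.eeg_branch(eeg_image)            # (B, eeg_feat_dim)
        fused = torch.cat([img_feat, eeg_feat], dim=1)   # concatenation fusion
        return self.classifier(fused)


if __name__ == "__main__":
    model = ConcatFusionClassifier()
    images = torch.randn(4, 3, 224, 224)    # RGB visual stimuli (assumed size)
    eeg_images = torch.randn(4, 1, 64, 64)  # EEG encoded as grayscale images (assumed size)
    logits = model(images, eeg_images)
    print(logits.shape)  # torch.Size([4, 39])
```

In this kind of design, each modality keeps its own feature extractor, and the fusion happens only at the feature level via torch.cat, so either branch can be pretrained or replaced independently of the other.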