Parallel Spatial–Temporal Self-Attention CNN-Based Motor Imagery Classification for BCI

Bibliographic Details
Published in: Frontiers in Neuroscience, Vol. 14, p. 587520
Main Authors: Liu, Xiuling; Shen, Yonglong; Liu, Jing; Yang, Jianli; Xiong, Peng; Lin, Feng
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Research Foundation / Frontiers Media S.A., 11.12.2020

Summary: Motor imagery (MI) electroencephalography (EEG) classification is an important part of the brain-computer interface (BCI), allowing people with mobility problems to communicate with the outside world via assistive devices. However, EEG decoding is a challenging task because of its complexity, dynamic nature, and low signal-to-noise ratio. Designing an end-to-end framework that fully extracts the high-level features of EEG signals remains a challenge. In this study, we present a parallel spatial–temporal self-attention-based convolutional neural network for four-class MI EEG signal classification. This study is the first to define a new spatial–temporal representation of raw EEG signals that uses the self-attention mechanism to extract distinguishable spatial–temporal features. Specifically, we use the spatial self-attention module to capture the spatial dependencies between the channels of MI EEG signals. This module updates each channel by aggregating features over all channels with a weighted summation, thus improving the classification accuracy and eliminating the artifacts caused by manual channel selection. Furthermore, the temporal self-attention module encodes the global temporal information into features for each sampling time step, so that the high-level temporal features of the MI EEG signals can be extracted in the time domain. Quantitative analysis shows that our method outperforms state-of-the-art methods for intra-subject and inter-subject classification, demonstrating its robustness and effectiveness. For qualitative analysis, we perform a visual inspection of the new spatial–temporal representation estimated from the learned architecture. Finally, the proposed method is employed to control drones based on EEG signals, verifying its feasibility in real-time applications.
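The spatial self-attention module described in the summary updates each EEG channel by a weighted summation of features over all channels. As a rough illustration of that idea (a minimal NumPy sketch of scaled dot-product self-attention across channels, not the authors' implementation; the channel count, window length, projection size, and random weights are all illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_self_attention(X, Wq, Wk, Wv):
    """Attend across EEG channels.

    X: (channels, time) raw EEG segment; each channel acts as
    query, key, and value after a linear projection.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # each (channels, d)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])     # channel-to-channel affinities
    A = softmax(scores, axis=-1)                # rows sum to 1
    return A @ V                                # weighted sum over all channels

rng = np.random.default_rng(0)
C, T, d = 22, 256, 32                           # e.g. 22 channels, 256 samples
X = rng.standard_normal((C, T))
Wq, Wk, Wv = (rng.standard_normal((T, d)) for _ in range(3))
out = spatial_self_attention(X, Wq, Wk, Wv)
print(out.shape)  # → (22, 32): one attended feature vector per channel
```

The temporal self-attention module is the analogous operation applied along the time axis (attending over sampling time steps instead of channels), so each time step's features encode global temporal context.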
This article was submitted to Brain Imaging Methods, a section of the journal Frontiers in Neuroscience
Reviewed by: Davide Valeriani, Massachusetts Eye & Ear Infirmary, Harvard Medical School, United States; Jacobo Fernandez-Vargas, University of Essex, United Kingdom
Edited by: Saugat Bhattacharyya, Ulster University, United Kingdom
ISSN: 1662-4548; 1662-453X
DOI:10.3389/fnins.2020.587520