EISATC-Fusion: Inception Self-Attention Temporal Convolutional Network Fusion for Motor Imagery EEG Decoding

Bibliographic Details
Published in: IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 32, pp. 1535-1545
Main Authors: Liang, Guangjin; Cao, Dianguo; Wang, Jinqiang; Zhang, Zhongcai; Wu, Yuqiang
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024

Summary: The motor imagery brain-computer interface (MI-BCI) based on electroencephalography (EEG) is a widely used human-machine interface paradigm. However, due to the non-stationarity of EEG signals and individual differences among subjects, decoding accuracy is limited, which restricts the application of the MI-BCI. In this paper, we propose the EISATC-Fusion model for MI EEG decoding, consisting of an inception block, multi-head self-attention (MSA), a temporal convolutional network (TCN), and layer fusion. Specifically, we design a DS Inception block to extract multi-scale frequency band information, and a new cnnCosMSA module based on CNN and cos attention to resolve attention collapse and improve the interpretability of the model. The TCN module is improved with depthwise separable convolution to reduce the number of model parameters. The layer fusion consists of feature fusion and decision fusion, fully utilizing the features output by the model and enhancing its robustness. We improve the two-stage training strategy for model training: early stopping is used to prevent overfitting, with the accuracy and loss of the validation set serving as the stopping indicators. The proposed model achieves within-subject classification accuracies of 84.57% and 87.58% on BCI Competition IV Datasets 2a and 2b, respectively, and cross-subject classification accuracies of 67.42% and 71.23% (via transfer learning) when trained with two sessions and one session of Dataset 2a, respectively. The interpretability of the model is demonstrated through a weight visualization method.
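
The core idea behind the cos attention in the cnnCosMSA module, replacing dot-product attention scores with cosine similarity to counter attention collapse, can be illustrated with a minimal PyTorch sketch. This is a hypothetical implementation, not the authors' code: the class name `CosMultiheadSelfAttention`, the `temperature` parameter, and the token shapes are assumptions for illustration only.

```python
# Hedged sketch: multi-head self-attention whose scores use cosine similarity
# between queries and keys instead of scaled dot products. Not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosMultiheadSelfAttention(nn.Module):
    def __init__(self, embed_dim: int, num_heads: int, temperature: float = 0.1):
        super().__init__()
        assert embed_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.temperature = temperature          # rescales the bounded cosine scores
        self.qkv = nn.Linear(embed_dim, 3 * embed_dim)
        self.out = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim), e.g. temporal feature tokens from a CNN stem
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, seq_len, head_dim)
        q = q.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        # L2-normalize q and k so every score lies in [-1, 1], which keeps the
        # softmax from saturating on a few tokens (one view of attention collapse)
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.temperature, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, -1)
        return self.out(out)

if __name__ == "__main__":
    tokens = torch.randn(8, 125, 32)            # (batch, time steps, feature dim)
    msa = CosMultiheadSelfAttention(embed_dim=32, num_heads=4)
    print(msa(tokens).shape)                    # torch.Size([8, 125, 32])
```

Because the normalized scores are bounded, no single key can dominate the softmax purely through feature magnitude, which is one common way such cosine-based attention mitigates collapse.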
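
The depthwise separable convolution credited with shrinking the TCN can likewise be sketched. Again a hedged illustration under assumed settings: the block name, kernel size, dilation, and "same" padding below are illustrative choices, not details taken from the paper.

```python
# Hedged sketch: depthwise separable 1-D convolution. A depthwise conv filters
# each channel independently; a 1x1 pointwise conv then mixes the channels.
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int, dilation: int = 1):
        super().__init__()
        pad = (kernel_size - 1) // 2 * dilation   # "same" padding for odd kernels
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=pad, dilation=dilation, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); parameters are roughly in_ch*k + in_ch*out_ch,
        # versus in_ch*out_ch*k for a standard convolution of the same shape
        return self.pointwise(self.depthwise(x))

if __name__ == "__main__":
    x = torch.randn(8, 32, 1000)                  # e.g. 32 feature channels, 1000 samples
    block = DepthwiseSeparableConv1d(32, 64, kernel_size=15, dilation=2)
    print(block(x).shape)                         # torch.Size([8, 64, 1000])
```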
ISSN: 1534-4320, 1558-0210
DOI: 10.1109/TNSRE.2024.3382226