Self-supervised contrastive learning for EEG-based cross-subject motor imagery recognition

Bibliographic Details
Published in: Journal of Neural Engineering, Vol. 21, No. 2, pp. 26038–26052
Main Authors: Li, Wenjie; Li, Haoyu; Sun, Xinlin; Kang, Huicong; An, Shan; Wang, Guoxin; Gao, Zhongke
Format: Journal Article
Language: English
Published: England: IOP Publishing, 01.04.2024
ISSN: 1741-2560, 1741-2552
DOI: 10.1088/1741-2552/ad3986

Summary: Objective. The extensive application of electroencephalography (EEG) in brain-computer interfaces (BCIs) can be attributed to its non-invasive nature and capability to offer high-resolution data. The acquisition of EEG signals is a straightforward process, but the datasets associated with these signals frequently exhibit data scarcity and require substantial resources for proper labeling. Furthermore, the generalization performance of EEG models is significantly limited by the substantial inter-individual variability observed in EEG signals. Approach. To address these issues, we propose a novel self-supervised contrastive learning framework for decoding motor imagery (MI) signals in cross-subject scenarios. Specifically, we design an encoder combining a convolutional neural network and an attention mechanism. In the contrastive learning stage, the network is trained on a data-augmentation pretext task to minimize the distance between pairs of homologous transformations while simultaneously maximizing the distance between pairs of heterologous transformations. This enlarges the effective amount of training data and improves the network's ability to extract deep features from the original signals without relying on the true labels of the data. Main results. To evaluate our framework's efficacy, we conduct extensive experiments on three public MI datasets: the BCI IV IIa, BCI IV IIb, and HGD datasets. The proposed method achieves cross-subject classification accuracies of 67.32%, 82.34%, and 81.13% on the three datasets, demonstrating superior performance compared to existing methods. Significance. This method therefore holds great promise for improving the performance of cross-subject transfer learning in MI-based BCI systems.
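The contrastive objective described in the abstract, pulling together embeddings of homologous (same-source) augmented pairs while pushing apart heterologous pairs, is commonly realized with an NT-Xent-style loss. The sketch below is a minimal NumPy illustration of that general idea, not the paper's actual implementation; the encoder, augmentations, and temperature value here are assumptions for demonstration only.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss sketch.

    z1, z2: (batch, dim) embeddings of two augmented views of the same
    EEG segments. Homologous pairs (z1[i], z2[i]) are pulled together;
    all heterologous pairs in the batch are pushed apart.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, dim)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = (z @ z.T) / temperature                     # scaled cosine sims
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                    # drop self-similarity
    # The positive for row i is its other augmented view at i +/- N.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Hypothetical usage: stand-in embeddings for two augmentations
# (e.g. amplitude scaling and additive noise) of a batch of segments.
rng = np.random.default_rng(0)
view1 = rng.standard_normal((8, 16))
view2 = view1 + 0.01 * rng.standard_normal((8, 16))   # homologous views
loss = nt_xent_loss(view1, view2)
```

Minimizing this loss over many unlabeled augmented batches is what lets the encoder learn discriminative features before any labeled fine-tuning.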
Bibliography: JNE-106994.R2