Temporal–spatial transformer based motor imagery classification for BCI using independent component analysis

Bibliographic Details
Published in: Biomedical Signal Processing and Control, Vol. 87, p. 105359
Main Authors: Hameed, Adel; Fourati, Rahma; Ammar, Boudour; Ksibi, Amel; Alluhaidan, Ala Saleh; Ayed, Mounir Ben; Khleaf, Hussain Kareem
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.01.2024

More Information
Summary: Motor Imagery (MI) classification with electroencephalography (EEG) is a critical aspect of Brain–Computer Interface (BCI) systems, enabling individuals with mobility limitations to communicate with the outside world. However, the complexity, variability, and low signal-to-noise ratio of EEG data present significant challenges in decoding these signals, particularly in a subject-independent manner. To overcome these challenges, we propose a transformer-based approach that employs a self-attention process to extract features in the temporal and spatial domains. To establish spatial correlations across MI EEG channels, the self-attention module repeatedly updates each channel by averaging its features across all channels. This weighted averaging improves classification accuracy and removes artifacts introduced by manual channel selection. Furthermore, the temporal self-attention mechanism encodes global sequential information into the features of each sample time step, allowing superior temporal properties to be extracted from MI EEG data in the time domain. The effectiveness of the proposed strategy has been confirmed on the BCI Competition IV 2a and 2b benchmarks. Overall, our proposed model outperforms state-of-the-art methods and demonstrates greater stability in both subject-dependent and subject-independent settings.

Highlights:
• A transformer encoder-based neural network with an attention mechanism is proposed.
• A method of assigning weights to feature channels is incorporated.
• Validation on the public BCI Competition IV 2a and 2b datasets shows that the proposed model is competitive.
ISSN: 1746-8094, 1746-8108
DOI: 10.1016/j.bspc.2023.105359
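
The summary above describes two self-attention stages: a spatial one, in which each EEG channel is updated as a weighted average over all channels, and a temporal one, in which each time step attends to the whole sequence. The sketch below illustrates that idea in PyTorch. It is not the authors' implementation; the channel count and trial length follow the BCI Competition IV 2a format (22 channels, here assumed 1000 samples per trial), and the embedding size, head count, and mean-pooling readout are illustrative assumptions.

# Minimal sketch (not the authors' code): spatial self-attention over
# channels plus temporal self-attention over time steps, followed by a
# linear classifier. All hyperparameters are illustrative.
import torch
import torch.nn as nn

class TemporalSpatialAttention(nn.Module):
    def __init__(self, n_channels=22, n_times=1000, d_model=64,
                 n_heads=4, n_classes=4):
        super().__init__()
        # Embed each channel's full time series into a d_model-dim token.
        self.chan_embed = nn.Linear(n_times, d_model)
        # Spatial self-attention: every channel token attends to all
        # channel tokens, i.e. a learned weighted averaging over channels.
        self.spatial_attn = nn.MultiheadAttention(d_model, n_heads,
                                                  batch_first=True)
        # Embed each time step (a vector over channels) into a token.
        self.time_embed = nn.Linear(n_channels, d_model)
        # Temporal self-attention: every time-step token attends to all
        # time steps, injecting global sequential context.
        self.temporal_attn = nn.MultiheadAttention(d_model, n_heads,
                                                   batch_first=True)
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, n_times)
        # Spatial branch: tokens are channels.
        s = self.chan_embed(x)                 # (B, C, d_model)
        s, _ = self.spatial_attn(s, s, s)      # channel-wise weighted averaging
        s = s.mean(dim=1)                      # pool over channels -> (B, d_model)
        # Temporal branch: tokens are time steps.
        t = self.time_embed(x.transpose(1, 2)) # (B, T, d_model)
        t, _ = self.temporal_attn(t, t, t)     # global temporal context
        t = t.mean(dim=1)                      # pool over time -> (B, d_model)
        return self.classifier(torch.cat([s, t], dim=-1))

if __name__ == "__main__":
    model = TemporalSpatialAttention()
    dummy = torch.randn(8, 22, 1000)  # 8 trials, 22 channels, 1000 samples
    print(model(dummy).shape)         # -> torch.Size([8, 4])

Processing channels and time steps as two separate token sequences keeps the attention matrices small (C x C and T x T); the paper's actual encoder layout, positional encoding, and readout may differ.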