Motor imagery task classification using spatial–time–frequency features of EEG signals: a deep learning approach for improved performance
Published in: Evolving Systems, Vol. 16, No. 2, p. 67
Main Authors: , ,
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.06.2025 (Springer Nature B.V.)
ISSN: 1868-6478, 1868-6486
DOI: 10.1007/s12530-025-09696-8
Summary: Classification of electroencephalogram (EEG) signals according to the user-intended motor imagery (MI) task is crucial for effective brain–computer interfaces (BCIs). Current methods often encounter difficulties in attaining high classification accuracy. This study aims to improve accuracy by utilising spatial and time–frequency characteristics of multichannel EEG data with convolutional neural networks (CNNs). EEG signals acquired from the sensory-motor region were subjected to time–frequency analysis, creating three-dimensional spatially informed time–frequency representations (SITFR). The CNN was trained and validated using SITFR matrices corresponding to four motor imagery tasks, utilising the BCI Competition IV dataset IIa with a five-fold cross-validation technique. Gaussian noise data augmentation was applied to improve model robustness by increasing variability in EEG signals while preserving their structural integrity. Four time–frequency approaches, namely the continuous wavelet transform (CWT), wavelet synchrosqueezed transform (WSST), Fourier synchrosqueezed transform (FSST) and synchroextracting transform (SET), were used for this experiment. The CNN model attained a mean test accuracy of 98.18% and a kappa score of 0.98 for CWT-SITFR, outperforming the other TFR methods. The accuracies obtained for FSST, WSST and SET were 97.47%, 94.38% and 91.82%, with kappa scores of 0.97, 0.93 and 0.89, respectively. This approach enables the CNN to learn both time–frequency and spatial features, resulting in better performance compared with existing state-of-the-art techniques.
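The summary mentions Gaussian noise data augmentation that increases variability while preserving the structure of the EEG trials. A minimal NumPy sketch of that general idea follows; the function name, the trials × channels × samples epoch layout, and the SNR-based noise scaling are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def augment_with_gaussian_noise(epochs, snr_db=20.0, rng=None):
    """Return noisy copies of EEG epochs (trials x channels x samples).

    Noise variance is set relative to each trial's signal power at an
    assumed signal-to-noise ratio, so the waveform structure is kept
    while variability increases. The SNR value is hypothetical; the
    paper does not state its noise level.
    """
    rng = np.random.default_rng(rng)
    # Mean power per trial/channel, kept with a trailing axis for broadcasting
    signal_power = np.mean(epochs ** 2, axis=-1, keepdims=True)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, 1.0, size=epochs.shape) * np.sqrt(noise_power)
    return epochs + noise

# Example: 10 trials, 22 channels (as in BCI Competition IV dataset IIa),
# 1000 samples per trial
epochs = np.random.default_rng(1).standard_normal((10, 22, 1000))
augmented = augment_with_gaussian_noise(epochs, snr_db=20.0, rng=0)
```

The augmented trials could then be pooled with the originals before the time–frequency transform, doubling the effective training set for the CNN.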