Subject-adaptive SSVEP decoding based on time–frequency information
Published in | Biomedical signal processing and control Vol. 110; p. 108141 |
---|---|
Main Authors | |
Format | Journal Article |
Language | English |
Published | Elsevier Ltd, 01.12.2025 |
Summary: Steady-State Visual Evoked Potential (SSVEP)-based Brain–Computer Interfaces (BCIs) have been widely used. While unsupervised methods such as Filter Bank Canonical Correlation Analysis (FBCCA) perform well in long time windows, their performance declines significantly in short time windows. Supervised methods such as Task-Related Component Analysis (TRCA), on the other hand, perform well in short time windows but exhibit weak cross-subject generalization. To address these issues, this paper introduces the self-attention mechanism from the Transformer into the SSVEP decoding task to enhance the model's cross-subject adaptability. To learn individualized SSVEP features, the method fully leverages the spatiotemporal, frequency, and phase information in EEG, using segment embeddings and position embeddings to differentiate these features. Additionally, a token carrying additional channel information is incorporated to gather information from the other channels for classification. The proposed approach achieved promising results on two commonly used public SSVEP datasets, demonstrating better performance in short time windows and under cross-subject conditions than traditional unsupervised and supervised models, as well as supervised deep-learning models.
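The abstract above describes a Transformer encoder applied to SSVEP decoding, combining segment embeddings, position embeddings, and an extra classification token that aggregates cross-channel information. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of that general idea, and all names, layer sizes, and the way EEG is split into per-channel feature segments are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' code) of a Transformer-style SSVEP classifier
# with segment embeddings, position embeddings, and a learnable classification
# token. Every dimension and the class name SSVEPTransformerSketch are assumed.
import torch
import torch.nn as nn


class SSVEPTransformerSketch(nn.Module):
    def __init__(self, n_channels=9, seg_len=64, n_segments=3, d_model=128,
                 n_heads=4, n_layers=2, n_classes=40):
        super().__init__()
        # Each EEG channel contributes n_segments feature tokens (e.g. temporal,
        # spectral, and phase segments); each segment is projected to d_model.
        self.proj = nn.Linear(seg_len, d_model)
        # Learnable classification token that gathers information from all
        # channel/segment tokens through self-attention.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        # Segment embedding marks which kind of feature a token carries.
        self.segment_embed = nn.Embedding(n_segments, d_model)
        # Position embedding marks token order (classification token + all tokens).
        n_tokens = n_channels * n_segments + 1
        self.pos_embed = nn.Parameter(torch.zeros(1, n_tokens, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x, segment_ids):
        # x:           (batch, n_channels * n_segments, seg_len) feature segments
        # segment_ids: (n_channels * n_segments,) integer id of each segment type
        tokens = self.proj(x) + self.segment_embed(segment_ids)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        encoded = self.encoder(tokens)
        # Class logits are read from the classification token.
        return self.head(encoded[:, 0])


# Example forward pass with random tensors (shapes only, no real EEG data).
if __name__ == "__main__":
    model = SSVEPTransformerSketch()
    x = torch.randn(8, 9 * 3, 64)
    segment_ids = torch.arange(3).repeat(9)   # segment-type id for every token
    print(model(x, segment_ids).shape)        # torch.Size([8, 40])
```

In this sketch the classification token plays the role of the abstract's "additional channel information" token: it attends to all channel/segment tokens and its final state is used for the frequency-class prediction, while the segment and position embeddings let the encoder tell apart the temporal, spectral, and phase features of each channel.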
ISSN: 1746-8094
DOI: 10.1016/j.bspc.2025.108141