Inter- and intra-subject transfer learning for high-performance SSVEP-BCI with extremely little calibration effort

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 276, p. 127208
Main Authors: Li, Hui; Xu, Guanghua; Li, Zejin; Zhang, Kai; Jiang, Hanli; Guo, Xiaobing; Zhu, Yongzhen; Yang, Xuwei; Zhao, Yihua; Han, Chengcheng
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.06.2025
Summary: High-performance steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) typically require large amounts of calibration data to derive individual-specific model parameters. This imposes a significant burden on the use of SSVEP-BCI and limits its practical applications. Existing transfer learning methods for SSVEP-BCI suffer from poor transfer performance and inefficient use of calibration data, and still rely on substantial calibration data from target or source subjects. This study proposes an effective inter- and intra-subject transfer learning framework (IISTLF), which requires only one source subject and calibration data from a single class of the target subject. The prior knowledge contained in the limited calibration data of the target subject is used for inter-subject domain alignment and for extracting intra-subject common knowledge. A conditional distribution alignment method based on least-squares transformation (CSTL-LST) and the proposed marginal distribution alignment method, channel-wise alignment (CSTL-CWA), are employed for effective inter-subject transfer. Extensive experiments on the Benchmark dataset confirm the feasibility of CSTL-CWA in reducing spatial distribution differences of SSVEP signals between subjects. The results also show that IISTLF achieves satisfactory performance, with an average classification accuracy of 77.11 ± 15.50 % across all signal lengths, significantly outperforming the comparison methods FBCCA (65.11 ± 16.73 %), tt-CCA (64.81 ± 18.01 %), CSSFT (67.36 ± 16.58 %), the LST-based method (42.24 ± 23.99 %), and stCCA (50.14 ± 14.29 %). In addition, IISTLF exhibits the lowest negative transfer rate (2.10 ± 1.11 %), which is substantially lower than that of the other methods. The IISTLF framework provides a promising solution for minimizing the calibration data required from both target and source subjects and promotes the practical application of SSVEP-BCI.
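
The abstract names a least-squares transformation (LST) step for conditional distribution alignment. As an illustrative aside, the minimal Python/NumPy sketch below shows the general least-squares transformation idea used in cross-subject SSVEP transfer: a channel-space projection is fitted that maps the source subject's trial-averaged response for the one calibrated class onto the target subject's template, and is then applied to all source trials. The function names, array shapes, and toy data are assumptions for illustration only; this is not the paper's CSTL-LST implementation.

    import numpy as np

    def lst_projection(source_trials, target_template):
        # source_trials   : (n_trials, n_channels, n_samples) source-subject EEG,
        #                   all trials of the single calibrated stimulus class.
        # target_template : (n_channels, n_samples) trial-averaged target-subject
        #                   EEG for that same class.
        # Returns P       : (n_channels, n_channels) with P @ S ~= Y (least squares).
        source_template = source_trials.mean(axis=0)              # (C, T)
        # Solve min_P ||P S - Y||_F^2 via the transposed system S^T P^T = Y^T.
        P_t, *_ = np.linalg.lstsq(source_template.T, target_template.T, rcond=None)
        return P_t.T                                               # (C, C)

    def align_source_data(source_data, P):
        # Project every source trial (any class) into the target channel space.
        return np.einsum('ij,njt->nit', P, source_data)

    # Toy usage with random numbers standing in for EEG epochs.
    rng = np.random.default_rng(0)
    src = rng.standard_normal((10, 9, 250))    # 10 trials, 9 channels, 1 s @ 250 Hz
    tgt = rng.standard_normal((9, 250))        # target template, same class
    P = lst_projection(src, tgt)
    aligned = align_source_data(src, P)
    print(P.shape, aligned.shape)              # (9, 9) (10, 9, 250)

Aligned source trials of all classes can then be pooled with the target subject's single-class calibration data to train a classifier, which is the general motivation for needing only one calibrated class from the target subject.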
ISSN: 0957-4174
DOI: 10.1016/j.eswa.2025.127208