Multi-layer transfer learning algorithm based on improved common spatial pattern for brain–computer interfaces


Bibliographic Details
Published in: Journal of Neuroscience Methods, Vol. 415, p. 110332
Main Authors: Cai, Zhuo; Gao, Yunyuan; Fang, Feng; Zhang, Yingchun; Du, Shunlan
Format: Journal Article
Language: English
Published: Netherlands: Elsevier B.V., 01.03.2025

Summary: In brain–computer interface (BCI) applications, differences in imaging methods and brain structure between subjects hinder the effectiveness of decoding algorithms when they are applied across subjects. Transfer learning has been designed to address this problem. Transfer learning has seen many applications in motor imagery (MI), but its effectiveness is still limited by inconsistent domain alignment, a lack of prominent data features, and the allocation of weights across trials. In this paper, a multi-layer transfer learning algorithm based on improved Common Spatial Patterns (MTICSP) was proposed to solve these problems. First, the source-domain and target-domain data were aligned by the Target Alignment (TA) method to reduce distribution differences between subjects. Second, the mean covariance matrix of each of the two classes was re-weighted by calculating the distance between the covariance matrix of each trial in the source domain and that of the target domain. Third, an improved Common Spatial Patterns (CSP) method with a regularization coefficient was proposed to further reduce the difference between the source and target domains during feature extraction. Finally, the feature blocks of the source and target domains were aligned again by the Joint Distribution Adaptation (JDA) method. Experiments on two public datasets under two transfer paradigms, multi-source to single-target (MTS) and single-source to single-target (STS), verified the effectiveness of the proposed method: accuracies of 80.21% (MTS) and 77.58% (STS) on the five-subject dataset, and 80.10% and 73.91%, respectively, on the nine-subject dataset. Experimental results also showed that the proposed algorithm was superior to other state-of-the-art algorithms. In addition, the generalization ability of MTICSP was validated on a self-collected fatigue EEG dataset, achieving 94.83% and 87.41% accuracy in the MTS and STS experiments, respectively.
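The first three stages summarized above (covariance-based alignment, distance-weighted mean covariance, regularized CSP) can be illustrated with a short sketch. This is not the authors' implementation: the alignment shown is the standard Euclidean whitening by the mean covariance, the weighting and regularization schemes are generic stand-ins for the paper's versions, and all function names and the synthetic data are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def trial_covs(trials):
    """Spatial covariance of each trial; trials has shape (n_trials, ch, samples)."""
    return np.array([X @ X.T / X.shape[1] for X in trials])

def euclidean_align(trials):
    """Whiten trials by the inverse square root of their mean covariance,
    so the aligned mean covariance becomes the identity."""
    R = trial_covs(trials).mean(axis=0)
    vals, vecs = np.linalg.eigh(R)
    R_isqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return np.array([R_isqrt @ X for X in trials])

def weighted_mean_cov(trials, ref_cov):
    """Mean covariance with weights inversely proportional to each trial
    covariance's Frobenius distance from a reference (target) covariance."""
    covs = trial_covs(trials)
    dist = np.array([np.linalg.norm(C - ref_cov) for C in covs])
    w = 1.0 / (dist + 1e-12)
    w /= w.sum()
    return np.tensordot(w, covs, axes=1)

def regularized_csp(C1, C2, n_filters=4, reg=1e-6):
    """CSP spatial filters from the generalized eigenproblem C1 w = mu (C1+C2) w,
    with a small ridge term for numerical stability."""
    n = C1.shape[0]
    I = np.eye(n)
    vals, vecs = eigh(C1 + reg * I, C1 + C2 + 2 * reg * I)  # ascending eigenvalues
    half = n_filters // 2
    pick = np.r_[np.arange(half), np.arange(n - half, n)]   # both spectral extremes
    return vecs[:, pick].T                                  # (n_filters, channels)

# Synthetic demo: two classes, 20 trials each, 8 channels, 100 samples per trial.
class1 = rng.standard_normal((20, 8, 100))
class2 = 1.5 * rng.standard_normal((20, 8, 100))
a1, a2 = euclidean_align(class1), euclidean_align(class2)
ref = trial_covs(a2).mean(axis=0)   # stand-in for a target-domain mean covariance
W = regularized_csp(weighted_mean_cov(a1, ref), weighted_mean_cov(a2, ref))
features = np.array([np.log(np.var(W @ X, axis=1)) for X in a1])  # log-variance CSP features
```

After alignment the mean covariance of each subject's trials is the identity, which is what makes covariances from different subjects directly comparable in the weighting step.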
The proposed method combines improved CSP with transfer learning to extract the features of the source and target domains effectively, providing a new way to combine transfer learning with motor imagery.
•Construction of a multi-layer transfer channel with feature retention: by integrating the two primary operational stages of transfer learning, preprocessing and feature extraction, into a multi-layer transfer channel, the approach minimizes the loss of dominant or recessive features during the transfer process, thereby enhancing the effectiveness of feature transfer.
•Application of a weighted CSP method based on Euclidean distance in the multi-layer transfer learning module: weighting data whose covariance lies closer to the target subject more heavily improves feature-extraction accuracy and robustness and addresses uneven data distribution.
•Integration of regularized CSP with transfer learning parameters and JDA for feature-extraction optimization: by minimizing the absolute difference of the covariance matrices between subjects in the source and target domains, the method balances the differences in extracted features after data alignment. This approach not only enhances the transferability of features but also strengthens the model's generalization across different subjects.
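The final highlight refers to JDA for aligning the extracted feature blocks. A minimal sketch of the marginal-distribution (MMD) core that JDA shares with TCA is below; full JDA additionally includes class-conditional MMD terms built from pseudo-labels, which are omitted here, and all names, parameters, and data are illustrative rather than the paper's implementation.

```python
import numpy as np
from scipy.linalg import eigh

def mmd_projection(Xs, Xt, n_components=2, lam=1.0):
    """Learn a projection W that minimizes the marginal MMD between projected
    source and target samples: solve (X M X^T + lam I) w = mu (X H X^T) w and
    keep the eigenvectors with the smallest eigenvalues."""
    X = np.vstack([Xs, Xt]).T                     # (d, n): columns are samples
    d, ns, nt = X.shape[0], len(Xs), len(Xt)
    n = ns + nt
    e = np.concatenate([np.full(ns, 1.0 / ns), np.full(nt, -1.0 / nt)])
    M = np.outer(e, e)                            # marginal MMD matrix
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    A = X @ M @ X.T + lam * np.eye(d)             # MMD term plus ridge
    B = X @ H @ X.T + 1e-9 * np.eye(d)            # total scatter, kept well-posed
    vals, vecs = eigh(A, B)                       # ascending eigenvalues
    W = vecs[:, :n_components]
    return Xs @ W, Xt @ W, W

rng = np.random.default_rng(1)
Xs = rng.standard_normal((40, 6)) + 2.0           # shifted source features
Xt = rng.standard_normal((50, 6))                 # target features
Zs, Zt, W = mmd_projection(Xs, Xt)
```

Minimizing the MMD term while constraining the total scatter keeps the two domains' feature distributions close in the shared subspace without collapsing the data.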
ISSN: 0165-0270
ISSN: 1872-678X
DOI:10.1016/j.jneumeth.2024.110332