Transfer Learning for P300 Brain-Computer Interfaces by Joint Alignment of Feature Vectors

Bibliographic Details
Published in: IEEE Journal of Biomedical and Health Informatics, Vol. 27, no. 10, pp. 1-11
Main Authors: Altindis, Fatih; Banerjee, Antara; Phlypo, Ronald; Yilmaz, Bulent; Congedo, Marco
Format: Journal Article
Language: English
Published: United States, IEEE, 01.10.2023

Summary: This paper presents a new transfer learning method named group learning, which jointly aligns multiple domains (many-to-many), and an extension named fast alignment, which aligns any further domain to a previously aligned group of domains (many-to-one). The proposed group alignment algorithm (GALIA) is evaluated on brain-computer interface (BCI) data, and the algorithm's optimal hyper-parameter values are studied with respect to classification performance and computational cost. Six publicly available P300 databases comprising 333 sessions from 177 subjects are used. Compared to the conventional subject-specific train/test pipeline, both group learning and fast alignment significantly improve classification accuracy, except on the database with clinical subjects (average improvement: 2.12±1.88%). GALIA uses cyclic approximate joint diagonalization (AJD) to find a set of linear transformations, one per domain, that jointly align the feature vectors of all domains. Group learning thus achieves many-to-many transfer learning without compromising classification performance on non-clinical BCI data. Fast alignment extends group learning to unseen domains, allowing many-to-one transfer learning with the same properties. The former method creates a single machine learning model using data from previous subjects and/or sessions, whereas the latter exploits the trained model for an unseen domain, requiring no further training of the classifier.
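The record gives no implementation details for GALIA's cyclic AJD, but the general idea it describes, learning one linear transformation per domain so that per-domain feature statistics coincide, can be illustrated with a much simpler baseline: whitening each domain's feature covariance so that every domain shares identity second-order statistics. The function name and the whitening-based alignment below are illustrative assumptions for this sketch, not the paper's algorithm.

```python
import numpy as np

def align_domains(feature_sets):
    """Illustrative domain alignment (NOT GALIA): center each domain's
    feature vectors and whiten them, giving every domain the same
    (identity) covariance so a single classifier can be shared.

    feature_sets: list of arrays, each of shape (n_samples, n_features).
    Returns a list of aligned arrays with the same shapes.
    """
    aligned = []
    for X in feature_sets:
        Xc = X - X.mean(axis=0)                # remove per-domain mean shift
        cov = np.cov(Xc, rowvar=False)         # per-domain covariance
        # Inverse matrix square root via eigendecomposition (cov is symmetric PSD)
        w, V = np.linalg.eigh(cov)
        W = V @ np.diag(1.0 / np.sqrt(np.clip(w, 1e-12, None))) @ V.T
        aligned.append(Xc @ W)                 # whitened features
    return aligned
```

After alignment, every domain's covariance is (numerically) the identity, so feature vectors from different subjects or sessions live on a common scale; GALIA's joint diagonalization pursues the same goal with transformations optimized jointly across all domains rather than independently per domain.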
ISSN: 2168-2194
EISSN: 2168-2208
DOI: 10.1109/JBHI.2023.3299837