Joint EEG Feature Transfer and Semisupervised Cross-Subject Emotion Recognition
Published in | IEEE Transactions on Industrial Informatics, Vol. 19, No. 7, pp. 8104–8115 |
---|---|
Main Authors | |
Format | Journal Article |
Language | English |
Published | Piscataway: IEEE, 01.07.2023 |
Publisher | The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
Subjects | |
Summary: | Due to their weak and nonstationary nature, electroencephalogram (EEG) signals present significant individual differences. To align the data distributions of different subjects, transfer learning has shown promising performance in cross-subject EEG emotion recognition. However, most existing models sequentially learn the domain-invariant features and then estimate the target-domain label information. Such a two-stage strategy breaks the inner connection between the two processes, inevitably causing suboptimality. In this article, we propose a joint EEG feature transfer and semisupervised cross-subject emotion recognition model, in which the shared subspace projection matrix and the target labels are jointly optimized. Extensive experiments conducted on SEED-IV and SEED show that the joint learning mode significantly enhances emotion recognition performance, and that analyzing the learned shared subspace quantitatively identifies the spatial-frequency activation patterns of the critical EEG frequency bands and brain regions in cross-subject emotion expression. |
ISSN | 1551-3203, 1941-0050 |
DOI | 10.1109/TII.2022.3217120 |
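The summary describes jointly optimizing a shared subspace projection matrix and the target-domain labels, rather than learning them in two separate stages. As an illustration only of that alternating idea (not the paper's actual objective or solver), a minimal NumPy sketch might fit a pooled subspace via plain PCA, pseudo-label target samples by nearest class centroid in that subspace, and iterate; the function name and all parameters here are hypothetical:

```python
import numpy as np

def joint_subspace_pseudo_label(Xs, ys, Xt, dim=2, iters=5):
    """Toy alternating scheme (illustrative, not the paper's method):
    (a) fit a shared linear subspace on pooled source+target data via PCA;
    (b) pseudo-label the target by nearest class centroid in that subspace,
        where centroids use source labels plus the current pseudo-labels;
    repeat so the subspace and the target labels inform each other."""
    classes = np.unique(ys)
    yt, W = None, None
    for _ in range(iters):
        # (a) shared subspace: top-`dim` principal directions of pooled data
        X = np.vstack([Xs, Xt])
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        W = Vt[:dim].T                      # d x dim projection matrix
        Zs, Zt = Xs @ W, Xt @ W
        # (b) class centroids in the subspace; after the first pass the
        # target's current pseudo-labels also contribute to the centroids
        if yt is None:
            cents = np.stack([Zs[ys == c].mean(axis=0) for c in classes])
        else:
            Z = np.vstack([Zs, Zt])
            lab = np.concatenate([ys, yt])
            cents = np.stack([Z[lab == c].mean(axis=0) for c in classes])
        # pseudo-labels: nearest centroid for each projected target sample
        d = np.linalg.norm(Zt[:, None, :] - cents[None, :, :], axis=2)
        yt = classes[np.argmin(d, axis=1)]
    return W, yt
```

The actual model couples the two steps through a single optimization objective; this sketch only conveys why joint (alternating) updates can outperform a one-shot two-stage pipeline: better pseudo-labels refine the subspace, and a better subspace refines the pseudo-labels.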