Multi-View Self-Supervised Domain Adaptation for EEG-Based Emotion Recognition

Bibliographic Details
Published in: IEEE Transactions on Affective Computing, pp. 1-13
Main Authors: Zhang, Lu; Shi, Hanwen; Li, Ziyi; Zheng, Wei-Long; Lu, Bao-Liang
Format: Journal Article
Language: English
Published: IEEE, 2025
Summary: Research on EEG-based emotion recognition has made significant progress. Most existing studies rely on supervised learning, yet real-world data rarely come with the high-quality labels such methods require. In addition, EEG signals exhibit individual variability and instability, which calls for transfer learning to improve model generalization. In this paper, we propose a multi-view self-supervised domain adaptation model that combines self-supervised learning techniques with a domain-adaptive transfer learning algorithm to address the latter two problems. Specifically, we add a multi-class domain discriminator that establishes an adversarial relationship among the sub-networks, so that the distribution discrepancy across subjects is reduced effectively. We conduct both subject-dependent and subject-independent experiments on the SEED and SEED-IV datasets to evaluate the model thoroughly. The results show that our model achieves strong emotion recognition performance even with limited labeled data. In the subject-dependent experiments, it reaches accuracies of 85.91% and 87.19% on SEED and SEED-IV respectively, surpassing the original self-supervised masked autoencoder model by about 3%. In the subject-independent experiments, it demonstrates strong adaptation to shifted data distributions, achieving accuracies of 69.72% and 62.87% on SEED and SEED-IV respectively while using only 90 samples, which effectively mitigates the accuracy degradation caused by cross-subject distribution differences. Furthermore, the model can extract meaningful features from corrupted EEG data, highlighting its robustness and effectiveness.
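The multi-class domain discriminator described in the summary is a domain-adversarial setup: the discriminator learns to predict which subject (domain) a sample came from, while the shared feature extractor is updated with the reversed gradient so that its features become subject-invariant. The record does not give the paper's architecture, so the sketch below is an illustrative assumption: both networks are reduced to single linear layers, and all names, dimensions, and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for EEG features from several subjects (domains).
# Dimensions are illustrative, not taken from the paper.
n_subjects, feat_dim, hid = 3, 8, 4
x = rng.normal(size=(60, feat_dim))            # batch of samples
subj = rng.integers(0, n_subjects, size=60)    # subject (domain) IDs

# Shared feature extractor and multi-class domain discriminator,
# each reduced to one linear layer for this sketch.
W_f = rng.normal(0.0, 0.1, (feat_dim, hid))
W_d = rng.normal(0.0, 0.1, (hid, n_subjects))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def adversarial_step(x, subj, lam=1.0, lr=0.1):
    """One update: the discriminator descends its cross-entropy loss on
    subject IDs, while the extractor ascends it (gradient reversal), so
    the shared features carry less subject-specific information."""
    global W_f, W_d
    h = x @ W_f                              # shared features
    p = softmax(h @ W_d)                     # predicted subject distribution
    y = np.eye(n_subjects)[subj]             # one-hot subject labels
    g_logits = (p - y) / len(x)              # d(cross-entropy)/d(logits)
    g_Wd = h.T @ g_logits                    # discriminator gradient
    g_Wf = x.T @ (g_logits @ W_d.T)          # gradient reaching the extractor
    W_d -= lr * g_Wd                         # discriminator: minimize CE
    W_f += lr * lam * g_Wf                   # extractor: reversed sign
    return -np.mean(np.sum(y * np.log(p + 1e-12), axis=1))

losses = [adversarial_step(x, subj) for _ in range(20)]
```

The reversed update pushes the extractor toward features on which the discriminator cannot tell subjects apart, which is the mechanism the summary credits for reducing the distribution discrepancy across subjects.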
ISSN: 1949-3045
DOI:10.1109/TAFFC.2025.3574868