Joint Temporal Convolutional Networks and Adversarial Discriminative Domain Adaptation for EEG-Based Cross-Subject Emotion Recognition

Bibliographic Details
Published in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2022), pp. 3214-3218
Main Authors: He, Zhipeng; Zhong, Yongshi; Pan, Jiahui
Format: Conference Proceeding
Language: English
Published: IEEE, 01.01.2022

Summary: Cross-subject emotion recognition is one of the most challenging tasks in electroencephalogram (EEG)-based emotion recognition. To guarantee the consistency of feature representations across domains and to eliminate differences between domains, we explored the feasibility of combining temporal convolutional networks (TCNs) and adversarial discriminative domain adaptation (ADDA) to address the problem of domain shift in EEG-based cross-subject emotion recognition. Because EEG signals have distinctive temporal properties, we chose the temporal model TCN as the feature encoder. To verify the validity of the proposed method, we conducted experiments on two public datasets, DEAP and DREAMER. For the leave-one-subject-out evaluation, average accuracies of 64.33% (valence) and 63.25% (arousal) were obtained on the DEAP dataset, and average accuracies of 66.56% (valence) and 63.69% (arousal) were achieved on the DREAMER dataset. Extensive experiments demonstrate that our method for EEG-based cross-subject emotion recognition is effective.
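The TCN encoder mentioned in the summary is built from causal dilated 1D convolutions: the output at time t depends only on the current and past samples, and the dilation factor lets the receptive field grow with depth. The following is a minimal NumPy sketch of that core operation, not the authors' implementation; the function name and weights are purely illustrative.

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """Causal dilated 1D convolution over a single-channel signal.

    The output at time t mixes samples x[t], x[t-d], ..., x[t-(k-1)d]
    (d = dilation, k = kernel size); left zero-padding keeps the output
    the same length as the input and prevents any leakage from future
    samples -- the property that makes a TCN suitable for EEG sequences.
    """
    k = len(w)
    pad = (k - 1) * dilation                      # left padding only (causal)
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    y = np.zeros(len(x))
    for t in range(len(x)):
        # taps at t, t-d, ..., t-(k-1)d in the original time axis
        taps = xp[t + pad - np.arange(k) * dilation]
        y[t] = taps @ w
    return y

# Example: kernel [1, 1] with dilation 2 computes y[t] = x[t] + x[t-2]
x = np.arange(1.0, 7.0)                           # [1, 2, 3, 4, 5, 6]
y = causal_dilated_conv1d(x, np.array([1.0, 1.0]), dilation=2)
# -> [1, 2, 4, 6, 8, 10]
```

Stacking such layers with dilations 1, 2, 4, ... gives a receptive field that grows exponentially with depth, which is why a TCN can summarize long EEG windows with few layers.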
ISSN:2379-190X
DOI:10.1109/ICASSP43922.2022.9746600