EEG-based cross-subject emotion recognition using multi-source domain transfer learning

Bibliographic Details
Published in: Biomedical Signal Processing and Control, Vol. 84, p. 104741
Main Authors: Quan, Jie; Li, Ying; Wang, Lingyue; He, Renjie; Yang, Shuo; Guo, Lei
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.07.2023
Summary:
•A multi-representation variational autoencoder (MR-VAE) based on the VAE is proposed.
•An emotional EEG classification model based on multi-source domain selection and subdomain adaptation is designed.
Emotion recognition based on electroencephalogram (EEG) has received extensive attention because EEG signals are objective and not controlled by subjective consciousness. However, inter-individual differences limit how well a model generalizes on cross-subject recognition tasks. To address this problem, this paper proposes a cross-subject emotional EEG classification algorithm based on multi-source domain selection and subdomain adaptation. We first design a multi-representation variational autoencoder (MR-VAE) that automatically extracts emotion-related features from multi-channel EEG, yielding a consistent EEG representation with as little prior knowledge as possible. We then propose a multi-source domain selection algorithm that selects the existing subjects' EEG data whose global and subdomain distributions are closest to those of the target data, thereby improving the transfer learning model's performance on the target subject. A small amount of annotated target data is used to achieve knowledge transfer and to raise the model's classification accuracy on the target subject as far as possible, which is of practical significance in clinical research. The proposed method achieves average classification accuracies of 92.83% and 79.30% on the two public datasets SEED and SEED-IV, respectively, which are 26.37% and 22.80% higher than the baseline non-transfer-learning method. Furthermore, we validate the proposed method on two other commonly used public datasets, DEAP and DREAMER; it establishes state-of-the-art results on the binary classification task of the DEAP dataset and achieves accuracy comparable to several transfer-learning-based methods on DREAMER.
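The MR-VAE architecture itself is not detailed in this record, but the standard VAE mechanics it builds on — an encoder producing a Gaussian latent distribution, the reparameterization trick, and a KL-divergence regularizer — can be sketched as follows. This is a minimal NumPy illustration; the linear encoder, the 16-dimensional latent space, and the 310-dimensional EEG feature vectors (e.g. 62 channels × 5 frequency bands, as is common for SEED) are assumptions for demonstration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

def encode(x, W_mu, W_logvar):
    """Toy linear encoder: map EEG feature vectors to the mean and
    log-variance of a diagonal Gaussian latent distribution."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps, keeping the sampling step
    differentiable with respect to mu and logvar."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_divergence(mu, logvar):
    """KL(q(z|x) || N(0, I)), the VAE regularizer, averaged over the batch."""
    return -0.5 * np.mean(np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1))

# 8 samples of hypothetical 310-dim EEG features, mapped to a 16-dim latent.
x = rng.standard_normal((8, 310))
W_mu = rng.standard_normal((310, 16)) * 0.05
W_logvar = rng.standard_normal((310, 16)) * 0.05

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar)          # latent EEG representation
print(z.shape)                          # (8, 16)
```

In a full VAE, the KL term is added to a reconstruction loss and the encoder weights are trained by gradient descent; the latent `z` would then serve as the subject-consistent representation fed to the downstream classifier.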
The detailed recognition results on DEAP and DREAMER are provided in the Appendix.
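The record does not specify how candidate source subjects are compared against the target distribution. A common choice for this kind of selection in EEG transfer learning is maximum mean discrepancy (MMD); the sketch below, with an assumed RBF kernel and hypothetical helper names, illustrates ranking source subjects by distribution distance and keeping the closest ones:

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between two sample sets
    using an RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq_dists)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def select_sources(sources, target, n_select=2):
    """Rank candidate source subjects by MMD to the target
    distribution and keep the n_select closest ones."""
    dists = [mmd_rbf(S, target) for S in sources]
    order = np.argsort(dists)
    return [int(i) for i in order[:n_select]]

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=(100, 4))
sources = [
    rng.normal(0.0, 1.0, size=(100, 4)),   # same distribution as target
    rng.normal(3.0, 1.0, size=(100, 4)),   # strongly shifted mean
    rng.normal(0.1, 1.0, size=(100, 4)),   # slightly shifted mean
]
print(select_sources(sources, target))     # the shifted source (index 1) is excluded
```

The paper's method additionally weighs subdomain (per-class) distributions, which this global-distance sketch does not capture; a per-class variant would apply the same distance within each emotion label.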
ISSN: 1746-8094, 1746-8108
DOI: 10.1016/j.bspc.2023.104741