Multisource Associate Domain Adaptation for Cross-Subject and Cross-Session EEG Emotion Recognition

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, Vol. 72, p. 1
Main Authors: She, Qingshan; Zhang, Chenqi; Fang, Feng; Ma, Yuliang; Zhang, Yingchun
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2023
Summary: Emotion recognition is important in brain-computer interface (BCI) applications, and building a recognition model that is robust across subjects and sessions is critical for emotion-based BCI systems. Electroencephalography (EEG) is a widely used tool for recognizing different emotion states. However, EEG signals have small amplitude, a low signal-to-noise ratio, and non-stationary properties, resulting in large differences across subjects. To address these problems, this paper proposes a new emotion recognition method based on a multi-source associate domain adaptation network that considers both domain-invariant and domain-specific features. First, separate branches are constructed for the multiple source domains, under the assumption that EEG data from different domains share the same low-level features. Second, domain-specific features are extracted by one-to-one associate domain adaptation. Then, weighted scores for the individual sources are computed from the distribution distance between each source and the target, and the multiple source classifiers are combined using these weights. Finally, EEG emotion recognition experiments were conducted on the SEED, DEAP, and SEED-IV datasets. In the cross-subject experiments, the average accuracy was 86.16% on SEED, 65.59% on DEAP, and 59.29% on SEED-IV. In the cross-session experiments, the accuracies on SEED and SEED-IV were 91.10% and 66.68%, respectively. The proposed method achieves better classification results than state-of-the-art domain adaptation methods.
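
The abstract's pipeline (a shared low-level extractor, one domain-specific branch and classifier per source, and distance-based weighting of the source classifiers) can be illustrated with a minimal PyTorch sketch. This is not the authors' code: the layer sizes, the use of a linear-kernel MMD as the "distribution distance", the softmax weighting over negative distances, and the 310-dimensional input (62 channels x 5 frequency bands of differential-entropy features, as commonly used with SEED) are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def mmd_linear(x, y):
    # Linear-kernel MMD estimate between two feature batches
    # (an assumed stand-in for the paper's distribution distance).
    return (x.mean(0) - y.mean(0)).pow(2).sum()

class MultiSourceDANet(nn.Module):
    def __init__(self, in_dim=310, hid_dim=64, n_classes=3, n_sources=2):
        super().__init__()
        # Shared low-level feature extractor, common to all source branches.
        self.shared = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        # One domain-specific extractor and one classifier per source domain.
        self.specific = nn.ModuleList(
            nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU())
            for _ in range(n_sources))
        self.clf = nn.ModuleList(
            nn.Linear(hid_dim, n_classes) for _ in range(n_sources))

    def forward(self, target_x, source_xs):
        # Weight each source classifier by its (softmax-normalized, negated)
        # distance to the target: closer sources get larger weights.
        t_shared = self.shared(target_x)
        logits, dists = [], []
        for spec, clf, s_x in zip(self.specific, self.clf, source_xs):
            t_feat = spec(t_shared)            # target through branch i
            s_feat = spec(self.shared(s_x))    # source i through branch i
            dists.append(mmd_linear(s_feat, t_feat))
            logits.append(clf(t_feat))
        w = F.softmax(-torch.stack(dists), dim=0)
        probs = torch.stack([F.softmax(l, dim=1) for l in logits])
        return (w.view(-1, 1, 1) * probs).sum(0), w

# Usage with random stand-in data for one target batch and two source batches.
net = MultiSourceDANet()
tgt = torch.randn(8, 310)
srcs = [torch.randn(8, 310), torch.randn(8, 310)]
pred, weights = net(tgt, srcs)
print(pred.shape, weights)

Training (omitted here) would add per-source classification losses and an alignment loss per branch; the sketch only shows how the weighted fusion of source classifiers could be wired.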
ISSN: 0018-9456, 1557-9662
DOI: 10.1109/TIM.2023.3277985