Dynamic Domain Adaptation for Class-Aware Cross-Subject and Cross-Session EEG Emotion Recognition
Published in: IEEE Journal of Biomedical and Health Informatics, Vol. 26, No. 12, pp. 5964-5973
Format: Journal Article
Language: English
Published: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), United States, 01.12.2022
Summary: It is vital to develop general models that can be shared across subjects and sessions for the real-world deployment of electroencephalogram (EEG) emotion recognition systems. Many prior studies have exploited domain adaptation algorithms to alleviate the inter-subject and inter-session discrepancies of EEG distributions. However, these methods aligned only the global domain divergence and overlooked the local divergence with respect to each emotion category, which degrades the emotion-discriminating ability of the domain-invariant features. In this paper, we argue that aligning the EEG data within the same emotion categories is important for learning generalizable and discriminative features. Hence, we propose the dynamic domain adaptation (DDA) algorithm, in which the global and local divergences are addressed by minimizing the global domain discrepancy and the local subdomain discrepancy, respectively. To handle the absence of emotion labels in the target domain, we introduce a dynamic training strategy in which the model focuses on optimizing the global domain discrepancy in the early training steps and then gradually switches to the local subdomain discrepancy. The DDA algorithm is implemented in both an unsupervised and a semi-supervised version for different experimental settings. Based on this coarse-to-fine alignment, our model achieves average peak accuracies of 91.08% and 92.89% on SEED, and 81.58% and 80.82% on SEED-IV, in the cross-subject and cross-session scenarios, respectively.
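The summary describes a coarse-to-fine alignment in which the training objective shifts from a global domain discrepancy to a per-class (local subdomain) discrepancy computed with target pseudo-labels. The sketch below illustrates one way such a dynamic blend could be written; the helper names (`gaussian_mmd`, `subdomain_mmd`, `dynamic_transfer_loss`), the Gaussian-kernel MMD estimator, and the linear annealing schedule are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of a dynamic global-to-local transfer loss (assumed form, not the authors' code).
import torch


def gaussian_mmd(x, y, sigma=1.0):
    """Squared MMD between two feature batches under a Gaussian kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()


def subdomain_mmd(src_feat, src_labels, tgt_feat, tgt_pseudo, num_classes, sigma=1.0):
    """Average per-class MMD (local subdomain discrepancy) using target pseudo-labels."""
    per_class = []
    for c in range(num_classes):
        s = src_feat[src_labels == c]
        t = tgt_feat[tgt_pseudo == c]
        if len(s) > 1 and len(t) > 1:  # skip emotion classes missing from this batch
            per_class.append(gaussian_mmd(s, t, sigma))
    return torch.stack(per_class).mean() if per_class else src_feat.new_zeros(())


def dynamic_transfer_loss(step, total_steps, src_feat, src_labels,
                          tgt_feat, tgt_pseudo, num_classes):
    """Blend global and local alignment; the weight ramps from 0 (global only)
    toward 1 (local only) as training progresses."""
    lam = min(1.0, step / max(1, total_steps))  # assumed linear schedule
    global_term = gaussian_mmd(src_feat, tgt_feat)
    local_term = subdomain_mmd(src_feat, src_labels, tgt_feat, tgt_pseudo, num_classes)
    return (1.0 - lam) * global_term + lam * local_term


# Toy usage: random 64-d features for a 3-class (SEED-style) setting.
src, tgt = torch.randn(32, 64), torch.randn(32, 64)
src_y, tgt_pseudo = torch.randint(0, 3, (32,)), torch.randint(0, 3, (32,))
loss = dynamic_transfer_loss(step=500, total_steps=2000,
                             src_feat=src, src_labels=src_y,
                             tgt_feat=tgt, tgt_pseudo=tgt_pseudo, num_classes=3)
```

In such a scheme the target pseudo-labels would come from the classifier's current predictions, which is why the local term is weighted up only later in training, once those predictions are presumably more reliable.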
ISSN: 2168-2194, 2168-2208
DOI: 10.1109/JBHI.2022.3210158