Cross-Subject EEG-Based Emotion Recognition via Semisupervised Multisource Joint Distribution Adaptation


Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, Vol. 72, pp. 1-11
Main Authors: Jimenez-Guarneros, Magdiel; Fuentes-Pineda, Gibran
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023

Summary: Most emotion recognition systems still present limited applicability to new users due to the intersubject variability of electroencephalogram (EEG) signals. Although domain adaptation methods have been adopted to tackle this problem, most methodologies deal only with unlabeled data from a target subject. However, a few labeled samples from a target subject could also be included to boost cross-subject emotion recognition. In this article, we present a semisupervised domain adaptation (SSDA) framework that aligns the joint distributions of subjects, under the assumption that fine-grained structures must be aligned to achieve a greater knowledge transfer. To this end, the proposed framework performs a multisource alignment of features at the subject level, while predictions are aligned over the global feature space. To support joint distribution alignment, interclass separation and consistent predictions are enforced on the target subject. We perform experiments on two public benchmark datasets, SEED and SEED-IV, with two different sampling strategies for incorporating a few labeled samples from a target subject. Our proposal achieves average accuracies of 93.55% and 87.96% on SEED and SEED-IV, respectively, using three labeled target samples per class, and average accuracies of 91.79% and 85.45% when incorporating ten labeled samples from the first EEG trial of each class.
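The summary describes three cooperating objectives: supervised classification on the source subjects plus the few labeled target samples, subject-level alignment of each source subject's features to the target, and regularization that keeps target predictions consistent and the emotion classes well separated. The PyTorch sketch below is one plausible reading of that recipe, not the authors' implementation: the network sizes, the Gaussian-kernel MMD used as the alignment criterion, the entropy term standing in for the interclass-separation and consistency constraints, and the loss weights are all assumptions.

```python
# A minimal PyTorch sketch of the training objective suggested by the
# abstract. Everything here (network sizes, the MMD alignment criterion,
# the entropy regularizer, loss weights) is an illustrative assumption,
# not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_mmd(x, y, sigma=1.0):
    # Maximum mean discrepancy with a Gaussian kernel: one common choice
    # for aligning feature distributions between two subjects.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

class EmotionNet(nn.Module):
    # 310-dim input assumes SEED-style differential entropy features
    # (62 channels x 5 frequency bands); 3 classes for SEED, 4 for SEED-IV.
    def __init__(self, in_dim=310, feat_dim=64, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.classifier(z)

def ssda_loss(model, sources, target_labeled, target_unlabeled,
              lam=1.0, mu=0.1):
    # (i) Supervised loss on the few labeled target samples...
    x_tl, y_tl = target_labeled
    _, logits_tl = model(x_tl)
    loss_cls = F.cross_entropy(logits_tl, y_tl)

    z_tu, logits_tu = model(target_unlabeled)

    # (ii) ...plus supervised loss on each source subject, with
    # subject-level (multisource) alignment of its features to the target.
    align = 0.0
    for x_s, y_s in sources:  # one (features, labels) pair per source subject
        z_s, logits_s = model(x_s)
        loss_cls = loss_cls + F.cross_entropy(logits_s, y_s)
        align = align + gaussian_mmd(z_s, z_tu)

    # (iii) Entropy minimization on unlabeled target data: a simple proxy
    # for consistent predictions and interclass separation on the target.
    p = F.softmax(logits_tu, dim=1)
    entropy = -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

    return loss_cls + lam * align / len(sources) + mu * entropy
```

Aligning each source subject to the target separately, rather than pooling all sources into one domain, mirrors the multisource, subject-level alignment the summary emphasizes; the 310-dimensional input matches the differential entropy features commonly extracted from the SEED datasets.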
ISSN: 0018-9456, 1557-9662
DOI: 10.1109/TIM.2023.3302938