MMDA: A Multimodal and Multisource Domain Adaptation Method for Cross-Subject Emotion Recognition From EEG and Eye Movement Signals
Published in | IEEE Transactions on Computational Social Systems, pp. 1–14
---|---
Main Authors | , ,
Format | Journal Article
Language | English
Published | IEEE, 2024
Summary: Multimodal emotion recognition from electroencephalogram (EEG) and eye movement signals has been shown to be a promising approach for providing more discriminative information about human emotional states. However, most current works rely on a subject-dependent approach, which limits their applicability to new users. Recently, some studies have explored multimodal domain adaptation to address this issue by transferring information from known subjects to new ones. Unfortunately, existing methods are still exposed to negative transfer, because the distribution alignment performed between subjects is suboptimal and irrelevant information is not discarded. In this article, we present a multimodal and multisource domain adaptation (MMDA) method that adopts three strategies: 1) marginal and conditional distribution alignments are performed between each known subject and the new one; 2) the most relevant alignments are prioritized to avoid negative transfer; and 3) modality fusion is improved by extracting more discriminative features from EEG signals and selecting relevant features across modalities. The proposed method was evaluated with leave-one-subject-out cross-validation on four public datasets: SEED, SEED-GER, SEED-IV, and SEED-V. Experimental results show that MMDA outperforms state-of-the-art results on each dataset when subject data from different sessions are combined into a single dataset, and exceeds the state of the art in 8 of the 11 individual sessions when each session is evaluated separately. (An illustrative sketch of the leave-one-subject-out evaluation protocol follows the record.)
ISSN: 2329-924X, 2373-7476
DOI: 10.1109/TCSS.2024.3519300
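The abstract names the evaluation protocol (leave-one-subject-out cross-validation) and the idea of prioritizing the most relevant source subjects, but not the implementation details. The sketch below only illustrates those two ideas under stated assumptions: it uses an RBF-kernel maximum mean discrepancy (MMD) as the distribution-mismatch measure, a simple softmax weighting of source subjects, and a plain logistic-regression classifier. None of these choices are taken from the paper, and `rbf_mmd2` and `loso_evaluate` are hypothetical helper names, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression

def rbf_mmd2(Xs, Xt, gamma=1.0):
    """Squared maximum mean discrepancy (RBF kernel) between a source
    subject's features Xs and the target subject's features Xt.
    Smaller values indicate more similar marginal distributions."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * d2)
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2.0 * k(Xs, Xt).mean()

def loso_evaluate(X, y, subject_ids):
    """Leave-one-subject-out protocol: each subject is held out once as the
    unseen target, and the remaining subjects act as source domains.
    Source subjects whose features are closer to the target (lower MMD)
    receive larger sample weights, de-emphasizing irrelevant sources."""
    logo = LeaveOneGroupOut()
    accuracies = []
    for train_idx, test_idx in logo.split(X, y, groups=subject_ids):
        X_tr, y_tr, g_tr = X[train_idx], y[train_idx], subject_ids[train_idx]
        X_te, y_te = X[test_idx], y[test_idx]

        # Relevance weight per source subject: softmax over negative MMD.
        sources = np.unique(g_tr)
        mmd = np.array([rbf_mmd2(X_tr[g_tr == s], X_te) for s in sources])
        weights = np.exp(-mmd) / np.exp(-mmd).sum()

        # Expand per-subject relevance weights to per-sample weights.
        sample_w = np.zeros(len(y_tr))
        for s, w in zip(sources, weights):
            sample_w[g_tr == s] = w

        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_tr, y_tr, sample_weight=sample_w)
        accuracies.append(clf.score(X_te, y_te))
    return float(np.mean(accuracies))

# Usage with synthetic features standing in for fused EEG + eye-movement data.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))
y = rng.integers(0, 3, size=300)      # three emotion classes, arbitrary
subjects = np.repeat(np.arange(5), 60)  # five subjects, 60 samples each
print(loso_evaluate(X, y, subjects))
```

This is a minimal stand-in: the actual MMDA method additionally aligns conditional (class-wise) distributions and performs feature extraction and cross-modal feature selection, none of which are reproduced here.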