FMLAN: A novel framework for cross-subject and cross-session EEG emotion recognition
Published in: Biomedical Signal Processing and Control, Vol. 100, Article 106912
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.02.2025
Summary: Emotion recognition is significant in brain-computer interface (BCI) applications. Electroencephalography (EEG) is extensively employed for emotion recognition because of its precise temporal resolution and dependability. However, EEG signals vary across subjects and sessions, limiting the effectiveness of emotion recognition methods on new users. To address this problem, multi-source domain adaptation has been introduced to EEG emotion recognition. For cross-subject and cross-session emotion recognition methods, two aspects are most important: extracting features relevant to the emotion recognition task, and aligning the features of labeled subjects or sessions (source domains) with those of the unlabeled subject or session (target domain). In this study, we propose a Fine-grained Mutual Learning Adaptation Network (FMLAN) that makes innovative improvements in both aspects. Specifically, we establish multiple separate domain adaptation sub-networks, each corresponding to a specific source domain, together with a single joint domain adaptation sub-network that combines all source domains. For EEG emotion recognition, we introduce mutual learning for the first time to connect the separate domain adaptation sub-networks with the joint domain adaptation sub-network. This facilitates the transfer of complementary information between domains, enabling each sub-network to extract more comprehensive and robust features. In addition, we design a novel Fine-grained Alignment Module (FAM) that takes category and decision boundary information into account during feature alignment, ensuring more accurate alignment. Extensive experiments on the SEED and SEED-IV datasets demonstrate that our approach outperforms state-of-the-art methods.
• Mutual learning is introduced to cross-subject and cross-session emotion recognition.
• A unique Fine-Grained Alignment Module is designed to enhance feature alignment.
• Extensive experiments show our approach outperforms state-of-the-art methods.
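To make the mutual-learning coupling concrete, below is a minimal PyTorch sketch of a generic deep-mutual-learning objective between per-source sub-networks and a joint sub-network. It is an illustration under our own assumptions, not the paper's exact loss: the function name `mutual_learning_losses`, the `alpha` weight, the choice of KL divergence as the mimicry term, and the averaging of the separate sub-networks' predictions are all hypothetical.

```python
import torch
import torch.nn.functional as F

def mutual_learning_losses(sep_logits, joint_logits, labels, alpha=1.0):
    # sep_logits:   list of K tensors (batch, n_classes), one per separate
    #               domain adaptation sub-network (one per source domain)
    # joint_logits: tensor (batch, n_classes) from the joint sub-network
    # labels:       tensor (batch,) of emotion class indices
    joint_prob = F.softmax(joint_logits, dim=1)

    # Each separate sub-network is supervised by the labels and additionally
    # mimics the joint sub-network's soft predictions (KL mimicry term).
    sep_losses = []
    for logits in sep_logits:
        ce = F.cross_entropy(logits, labels)
        kl = F.kl_div(F.log_softmax(logits, dim=1), joint_prob.detach(),
                      reduction="batchmean")
        sep_losses.append(ce + alpha * kl)

    # The joint sub-network in turn mimics the averaged soft predictions of
    # the separate sub-networks, so complementary information flows both ways.
    mean_sep_prob = torch.stack(
        [F.softmax(logits, dim=1) for logits in sep_logits]).mean(dim=0)
    joint_loss = F.cross_entropy(joint_logits, labels) + alpha * F.kl_div(
        F.log_softmax(joint_logits, dim=1), mean_sep_prob.detach(),
        reduction="batchmean")
    return sep_losses, joint_loss
```

In this sketch, each entry of `sep_losses` would be back-propagated through its own sub-network and `joint_loss` through the joint sub-network; detaching the mimicry targets treats each network as a fixed teacher within a training step.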
ISSN: 1746-8094
DOI: 10.1016/j.bspc.2024.106912