Spatiotemporal isomorphic cross-brain region interaction network for cross-subject EEG emotion recognition
| Published in | Knowledge-Based Systems, Vol. 327, p. 114115 |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | Elsevier B.V., 09.10.2025 |
Summary: | Electroencephalography (EEG) offers high temporal resolution at low cost and has become one of the most important tools for emotion recognition in human-computer interaction. However, because of the brain's intricate architecture and function and the substantial individual variance among participants, existing methods struggle to simultaneously model the temporal and spatial consistency of brain-region interactions and of EEG signals across subjects, which limits generalization in cross-subject settings. To meet this challenge, we propose a cross-subject EEG emotion recognition model based on a spatiotemporal isomorphic cross-brain-region interaction network (STCBI-Nets). First, we design a cross-brain-region interaction (CBI) module that dynamically models the interactions between brain regions through a multi-head cross-attention mechanism; it captures the heterogeneous information flow between local brain regions, strengthens long-range dependency modeling of EEG time series, and integrates whole-brain collaborative activation patterns. Second, we design a spatiotemporal isomorphic adaptive fusion (STIAF) block with a dual-branch structure that mines hierarchical and complementary spatiotemporal features, and we introduce a negative-sample-weighted contrastive learning mechanism and a dynamic fusion strategy to improve the robustness and discriminative power of the cross-view shared representations, enhancing the model's adaptability to different subjects. Finally, we propose a jointly optimized adaptive domain alignment strategy (JOADAS) that combines global adversarial learning with an adaptive class-center alignment mechanism to reduce domain bias between subjects at both macro and micro levels, enhancing intra-class aggregation and inter-class separability and thereby improving discriminative performance and cross-subject generalization. Extensive experiments on multiple datasets demonstrate the superiority of the proposed algorithm: STCBI-Nets outperforms state-of-the-art (SOTA) methods and exhibits stronger generalization and stability in cross-subject EEG emotion recognition tasks. |
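The abstract's CBI module rests on multi-head cross-attention, where one brain region's features serve as queries against another region's keys and values. The following NumPy sketch illustrates only that core operation; all names, shapes, and the random projection weights are hypothetical stand-ins and do not reproduce the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, n_heads=4, rng=None):
    """One cross-attention pass: one region's time series (T, d) attends
    to another region's. Random projections stand in for learned weights."""
    rng = np.random.default_rng(0) if rng is None else rng
    T, d = queries.shape
    dh = d // n_heads
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    # project, then split the feature dim into heads: (heads, T, dh)
    Q = (queries @ Wq).reshape(T, n_heads, dh).transpose(1, 0, 2)
    K = (keys_values @ Wk).reshape(-1, n_heads, dh).transpose(1, 0, 2)
    V = (keys_values @ Wv).reshape(-1, n_heads, dh).transpose(1, 0, 2)
    # scaled dot-product attention per head
    attn = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(dh), axis=-1)
    out = (attn @ V).transpose(1, 0, 2).reshape(T, d)
    return out, attn

# two hypothetical brain regions, 8 time steps, 16 features each
rng = np.random.default_rng(42)
frontal = rng.standard_normal((8, 16))
occipital = rng.standard_normal((8, 16))
fused, attn = cross_attention(frontal, occipital, n_heads=4, rng=rng)
print(fused.shape)  # (8, 16)
print(attn.shape)   # (4, 8, 8): one attention map per head
```

Each of the 4 heads produces an 8×8 attention map whose rows sum to 1, i.e. a soft routing of information flow from the key region to the query region at each time step.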
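The JOADAS component's class-center alignment can be pictured as pulling the per-emotion feature centroids of two subjects (domains) toward each other. This minimal sketch, assuming a squared-Euclidean center distance and pseudo-labels on the target subject, is an illustration of the general idea, not the paper's loss.

```python
import numpy as np

def class_centers(features, labels, n_classes):
    """Mean feature vector per emotion class for one domain (subject)."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def center_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo,
                          n_classes=3):
    """Mean squared distance between matching class centers of source
    and target: pulls same-emotion features together across subjects."""
    cs = class_centers(src_feats, src_labels, n_classes)
    ct = class_centers(tgt_feats, tgt_pseudo, n_classes)
    return float(np.mean(np.sum((cs - ct) ** 2, axis=1)))

rng = np.random.default_rng(0)
src = rng.standard_normal((30, 8)); src_y = rng.integers(0, 3, 30)
tgt = rng.standard_normal((30, 8)); tgt_y = rng.integers(0, 3, 30)
loss = center_alignment_loss(src, src_y, tgt, tgt_y)
print(loss >= 0.0)  # a squared distance is always non-negative
```

In a full pipeline this term would be added to a classification loss and a global adversarial loss, with target pseudo-labels refreshed as training proceeds.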
ISSN: | 0950-7051 |
DOI: | 10.1016/j.knosys.2025.114115 |