FreqDGT: Frequency-Adaptive Dynamic Graph Networks with Transformer for Cross-subject EEG Emotion Recognition

Bibliographic Details
Main Authors: Li, Yueyang; Gong, Shengyu; Zeng, Weiming; Wang, Nizhuan; Siok, Wai Ting
Format: Journal Article
Language: English
Published: 19.08.2025
DOI: 10.48550/arxiv.2506.22807

Summary: 2025 International Conference on Machine Intelligence and Nature-InspireD Computing (MIND). Electroencephalography (EEG) serves as a reliable and objective signal for emotion recognition in affective brain-computer interfaces, offering unique advantages through its high temporal resolution and its ability to capture authentic emotional states that cannot be consciously controlled. However, cross-subject generalization remains a fundamental challenge due to individual variability, cognitive traits, and emotional responses. We propose FreqDGT, a frequency-adaptive dynamic graph transformer that systematically addresses these limitations through an integrated framework. FreqDGT introduces frequency-adaptive processing (FAP) to dynamically weight emotion-relevant frequency bands based on neuroscientific evidence, employs adaptive dynamic graph learning (ADGL) to learn input-specific brain connectivity patterns, and implements a multi-scale temporal disentanglement network (MTDN) that combines hierarchical temporal transformers with adversarial feature disentanglement to both capture temporal dynamics and ensure cross-subject robustness. Comprehensive experiments demonstrate that FreqDGT significantly improves cross-subject emotion recognition accuracy, confirming the effectiveness of integrating frequency-adaptive, spatial-dynamic, and temporal-hierarchical modeling while ensuring robustness to individual differences. The code is available at https://github.com/NZWANG/FreqDGT.
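As a rough illustration of the frequency-adaptive processing (FAP) idea the summary describes, the following is a minimal PyTorch sketch that learns per-input attention weights over the canonical EEG frequency bands and rescales band-wise features accordingly. All names, shapes, and the five-band layout are illustrative assumptions, not taken from the paper or its repository.

```python
import torch
import torch.nn as nn


class FrequencyAdaptiveWeighting(nn.Module):
    """Hypothetical sketch of frequency-adaptive band weighting:
    score each EEG band (e.g. delta, theta, alpha, beta, gamma) from its
    pooled features, then softmax the scores into input-specific weights."""

    def __init__(self, feat_dim: int = 32):
        super().__init__()
        # Small MLP that maps a band's pooled feature vector to a scalar score.
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 2),
            nn.ReLU(),
            nn.Linear(feat_dim // 2, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_bands, channels, feat_dim)
        pooled = x.mean(dim=2)                     # (batch, num_bands, feat_dim)
        scores = self.scorer(pooled).squeeze(-1)   # (batch, num_bands)
        weights = torch.softmax(scores, dim=-1)    # per-input band weights
        return x * weights[:, :, None, None]       # reweight band-wise features


if __name__ == "__main__":
    fap = FrequencyAdaptiveWeighting(feat_dim=32)
    feats = torch.randn(8, 5, 62, 32)  # e.g. 62-channel EEG, 5 bands (assumed)
    out = fap(feats)
    print(out.shape)  # torch.Size([8, 5, 62, 32])
```

In this sketch the band weights are computed from the input itself rather than fixed a priori, which is one plausible reading of "dynamically weight emotion-relevant frequency bands"; the paper's actual FAP module may differ in both architecture and weighting scheme.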