DDNet: a hybrid network based on deep adaptive multi-head attention and dynamic graph convolution for EEG emotion recognition

Bibliographic Details
Published in: Signal, Image and Video Processing, Vol. 19, no. 4
Main Authors: Xu, Bingyue; Zhang, Xin; Zhang, Xiu; Sun, Baiwei; Wang, Yujie
Format: Journal Article
Language: English
Published: London: Springer London (Springer Nature B.V.), 01.04.2025
Summary: Emotion recognition plays a crucial role in cognitive science and human-computer interaction. Existing techniques tend to ignore the significant differences between subjects, resulting in limited accuracy and generalization ability. In addition, existing methods have difficulty capturing the complex relationships among the channels of electroencephalography signals. A hybrid network is proposed to overcome these limitations. The proposed network comprises a deep adaptive multi-head attention (DAM) branch and a dynamic graph convolution (DGC) branch. The DAM branch uses residual convolution and an adaptive multi-head attention mechanism, allowing it to focus on multi-dimensional information from different representational subspaces at different locations. The DGC branch uses a dynamic graph convolutional neural network that learns topological features among the channels. The synergistic effect of these two branches enhances the model’s adaptability to subject differences. The extraction of local features and the understanding of global patterns are also optimized in the proposed network. Subject-independent experiments were conducted on the SEED and SEED-IV datasets. The average accuracy on SEED was 92.63% and the average F1-score was 92.43%; the average accuracy on SEED-IV was 85.03% and the average F1-score was 85.01%. The results show that the proposed network has significant advantages in cross-subject emotion recognition and can improve accuracy and generalization ability in emotion recognition tasks.
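The summary describes a two-branch architecture: an attention branch with residual convolution (DAM) and a graph branch with a learnable channel adjacency (DGC), fused for classification. The sketch below is a minimal PyTorch illustration of that two-branch idea only; the layer sizes, input shape (62 electrodes, 5 band features, as in SEED-style differential-entropy inputs), fusion by concatenation, and the class DDNetSketch are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal sketch of a DAM + DGC two-branch model.
# Shapes, layer sizes, and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DAMBranch(nn.Module):
    """Residual 1-D convolution followed by multi-head self-attention over EEG channels."""
    def __init__(self, n_channels=62, n_features=5, d_model=64, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):               # x: (batch, channels, features)
        h = self.proj(x)                # (B, C, d_model)
        # residual convolution along the channel axis
        h = h + self.conv(h.transpose(1, 2)).transpose(1, 2)
        # multi-head self-attention across channels
        a, _ = self.attn(h, h, h)
        h = self.norm(h + a)
        return h.mean(dim=1)            # (B, d_model)


class DGCBranch(nn.Module):
    """Graph convolution with a learnable adjacency among EEG channels."""
    def __init__(self, n_channels=62, n_features=5, d_model=64):
        super().__init__()
        self.adj = nn.Parameter(torch.randn(n_channels, n_channels) * 0.01)
        self.fc = nn.Linear(n_features, d_model)

    def forward(self, x):               # x: (batch, channels, features)
        A = torch.softmax(F.relu(self.adj), dim=-1)    # learned, row-normalised adjacency
        h = torch.einsum('ij,bjf->bif', A, x)          # propagate features among channels
        h = F.relu(self.fc(h))
        return h.mean(dim=1)            # (B, d_model)


class DDNetSketch(nn.Module):
    """Concatenate the two branch embeddings and classify (3 classes for SEED)."""
    def __init__(self, n_channels=62, n_features=5, d_model=64, n_classes=3):
        super().__init__()
        self.dam = DAMBranch(n_channels, n_features, d_model)
        self.dgc = DGCBranch(n_channels, n_features, d_model)
        self.head = nn.Linear(2 * d_model, n_classes)

    def forward(self, x):
        return self.head(torch.cat([self.dam(x), self.dgc(x)], dim=-1))


if __name__ == "__main__":
    x = torch.randn(8, 62, 5)           # batch of 8 hypothetical samples
    print(DDNetSketch()(x).shape)       # torch.Size([8, 3])
```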
ISSN: 1863-1703 (print); 1863-1711 (electronic)
DOI: 10.1007/s11760-025-03876-4