Global Context MambaVision for EEG-based Emotion Recognition


Bibliographic Details
Published in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5
Main Authors: Wang, Hao; Xu, Li; Yu, Yuntao; Ding, Weiyue; Xu, Yiming
Format: Conference Proceeding
Language: English
Published: IEEE, 06.04.2025

Summary: Emotion recognition tasks based on physiological signals require the simultaneous capture of local features and global correlations of these signals. Although transformer-based models are widely used due to their superior ability to integrate information, their quadratic computational complexity limits their efficiency in processing large-scale or high-resolution data. Recently, state space models (SSMs) with efficient hardware-aware designs have demonstrated significant potential in modeling long sequences. However, existing SSMs face limitations in processing global information due to window constraints. This paper therefore introduces a novel Global Context (GC) MambaVision model, which combines the linear time complexity of SSMs with a new local-global attention mechanism. GC MambaVision maintains high computational efficiency in emotion recognition tasks while providing a more comprehensive understanding of the dynamic changes in local and global emotional states. Experimental results on the DEAP and SEED-V datasets show that GC MambaVision outperforms current state-of-the-art models, reaching accuracies of 98.62% and 85.88%, respectively.
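The linear-time advantage the summary attributes to SSMs comes from processing a length-L sequence with a single recurrent scan, O(L), instead of the O(L^2) pairwise scores of self-attention. The sketch below is purely illustrative and is not the authors' GC MambaVision implementation: it runs a minimal diagonal state-space recurrence over a toy 1-D signal (e.g., one EEG channel); all names and parameter values are assumptions for illustration.

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Illustrative diagonal state-space recurrence (not the paper's model):
        x[t] = A * x[t-1] + B * u[t]
        y[t] = C . x[t]
    One pass over the sequence, so cost grows as O(L) in sequence
    length L, versus the O(L^2) score matrix of self-attention."""
    N = A.shape[0]                 # state dimension
    x = np.zeros(N)                # hidden state
    y = np.empty(len(u))
    for t, u_t in enumerate(u):
        x = A * x + B * u_t        # elementwise: diagonal transition
        y[t] = C @ x               # readout
    return y

rng = np.random.default_rng(0)
u = rng.standard_normal(256)       # toy signal standing in for one EEG channel
A = np.full(8, 0.9)                # stable diagonal transition (|A| < 1)
B = np.ones(8)
C = rng.standard_normal(8)
y = ssm_scan(u, A, B, C)
print(y.shape)                     # one output per input step: (256,)
```

Because the scan touches each time step once, doubling the sequence length roughly doubles the work, which is the efficiency property the summary contrasts with transformer attention.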
ISSN:2379-190X
DOI:10.1109/ICASSP49660.2025.10890602