A temporal-spatial feature fusion network for emotion recognition with individual differences reduction
Published in: Neuroscience, Vol. 569, pp. 195–209
Main Authors: , , , ,
Format: Journal Article
Language: English
Published: Elsevier Inc, United States, 17.03.2025
Summary:
• A lightweight deep learning model was developed for EEG-based emotion recognition.
• A channel attention mechanism was employed to extract spatial features from EEG data.
• A Transformer was used to capture temporal features in EEG signals.
• A switchable whitening module was applied to reduce inter-subject variability.
• Spatiotemporal feature fusion was performed for emotion prediction and recognition.
In EEG-based emotion recognition, a conventional strategy is to extract spatial and temporal features separately and then fuse them for emotion prediction. However, because of the pronounced inter-individual variability in EEG signals and the limited performance of conventional time-series models, cross-subject experiments often yield suboptimal results. To address this limitation, we propose the Time-Space Emotion Network (TSEN), a novel network that fuses spatiotemporal information for emotion recognition.
Unlike prior models that simply integrate temporal and spatial features, our network introduces a Convolutional Block Attention Module (CBAM) during spatial feature extraction to adaptively weight feature channels and spatial positions. We further improve network stability and domain adaptation with a residual block that incorporates Switchable Whitening (SW). Temporal features are extracted with a Temporal Convolutional Network (TCN), which maintains high prediction accuracy while keeping the network lightweight.
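To make the architecture concrete, below is a minimal PyTorch sketch of the three components named above: CBAM-style channel and spatial attention, a residual block with a switchable-whitening stand-in, and a dilated causal TCN block. All module and parameter names are illustrative assumptions rather than the authors' code, and switchable whitening is simplified here to a learnable blend of batch and instance normalization statistics (the published SW module selects among full whitening and standardization variants).

```python
# Illustrative sketch only; module names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """CBAM-style channel attention: weight feature channels via pooled statistics."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))      # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))       # global max pooling
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: weight spatial positions."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        stats = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(stats))


class SWResidualBlock(nn.Module):
    """Residual block with a simplified switchable-whitening stand-in:
    a learnable mix of BatchNorm (cross-sample) and InstanceNorm
    (per-sample) statistics, intended to suppress subject-specific style."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn = nn.BatchNorm2d(channels)
        self.inorm = nn.InstanceNorm2d(channels, affine=True)
        self.mix = nn.Parameter(torch.zeros(1))  # sigmoid(0) = 0.5 blend

    def forward(self, x):
        h = F.relu(self.conv1(x))
        a = torch.sigmoid(self.mix)
        h = a * self.bn(h) + (1 - a) * self.inorm(h)
        return F.relu(x + self.conv2(h))


class TCNBlock(nn.Module):
    """Dilated causal 1-D convolution block, the core building block of a TCN."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.pad = (3 - 1) * dilation            # left-only padding => causal
        self.conv = nn.Conv1d(channels, channels, 3, dilation=dilation)

    def forward(self, x):                        # x: (B, C, T)
        h = F.pad(x, (self.pad, 0))
        return F.relu(x + self.conv(h))
```

In a TSEN-style pipeline, the attention-weighted, SW-stabilized spatial feature maps would be flattened per time step and passed through a stack of TCNBlocks before a classification head; the exact fusion scheme is the one described in the paper and is not fixed by this sketch.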
We conduct experiments on the preprocessed DEAP dataset. The average accuracy for arousal prediction is 0.7032 with a variance of 0.0876 and an F1 score of 0.6843; for valence prediction, the average accuracy is 0.6792 with a variance of 0.0853 and an F1 score of 0.6826.
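For reference, here is a small hedged sketch of how such cross-subject statistics can be computed: per-subject accuracies on held-out subjects, their mean and variance, and the F1 score over pooled predictions. The split protocol and the summarize helper are illustrative assumptions; the abstract reports only the aggregate numbers.

```python
# Hypothetical aggregation helper; the evaluation protocol is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score


def summarize(per_subject_true, per_subject_pred):
    """per_subject_*: lists of 1-D label arrays, one entry per held-out subject."""
    accs = [accuracy_score(t, p)
            for t, p in zip(per_subject_true, per_subject_pred)]
    pooled_t = np.concatenate(per_subject_true)
    pooled_p = np.concatenate(per_subject_pred)
    return {
        "mean_accuracy": float(np.mean(accs)),   # e.g. 0.7032 for arousal
        "variance": float(np.var(accs)),         # spread across subjects
        "f1": float(f1_score(pooled_t, pooled_p)),  # binary high/low label
    }
```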
TSEN exhibits high accuracy and low variance in cross-subject emotion prediction tasks, effectively reducing the impact of individual differences between subjects. In addition, TSEN has a smaller parameter count, enabling faster execution.
ISSN: 0306-4522
EISSN: 1873-7544
DOI: 10.1016/j.neuroscience.2025.01.049