MDAC: EEG Emotion Recognition with Multi-Scale Dual Attention Capsule Network
| Published in | Proceedings (IEEE International Conference on Bioinformatics and Biomedicine), pp. 4164-4171 |
|---|---|
| Main Authors | , , , , , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 03.12.2024 |
| ISSN | 2156-1133 |
| DOI | 10.1109/BIBM62325.2024.10822540 |
Summary: In recent years, deep learning has exhibited significant prowess in EEG-based affective recognition, and the attention mechanism has been a persistent focal point within the field. However, existing EEG analysis techniques still struggle to accurately pinpoint emotion-related signals across both spatial and temporal scales. To address this issue, we propose a novel model, MDAC, which combines a multi-scale dual attention (MDA) module with a capsule network tuned for EEG data. First, the MDA module applies perceptual fields of varying scales to the data and integrates them to simultaneously obtain attention weights at the pixel level and the sampling-rate level, achieving precise weighting of EEG signals in both the spatial and temporal resolutions. Furthermore, we increase the number of convolutional channels and the dimensionality of the primary capsules in CapsNet to better align with the characteristics of EEG data, yielding a structural configuration better suited to EEG-based affective recognition. We conducted subject-dependent and subject-independent experiments on the DEAP dataset to validate the model. In the subject-dependent experiments, the accuracy was 99.58% on both the valence and arousal dimensions; in the subject-independent experiments, the accuracies on the valence and arousal dimensions were 98.15% and 98.04%, respectively. The experimental results corroborate the efficacy of the proposed method for emotion recognition.
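The summary gives only a high-level description of the MDA module. As a reading aid, here is a minimal PyTorch sketch of one plausible interpretation: parallel convolutions with different kernel sizes stand in for the "varying scales of perceptual fields," and two sigmoid gates weight the electrode (pixel-level) and time (sampling-rate-level) axes. The module name, tensor layout, scale choices, and gate design are all assumptions made for illustration, not the authors' implementation; consult the paper (DOI above) for the exact architecture.

```python
import torch
import torch.nn as nn


class MultiScaleDualAttention(nn.Module):
    """Hypothetical sketch of a multi-scale dual attention block.

    Assumed input layout: (batch, channels, electrodes, time).
    Multi-scale convolutions approximate the "varying perceptual
    fields"; two sigmoid gates then weight the spatial (electrode)
    and temporal (sampling-rate) axes simultaneously, as the
    record's summary describes.
    """

    def __init__(self, channels: int, scales=(3, 5, 7)):
        super().__init__()
        # One convolution per assumed receptive-field scale.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2)
            for k in scales
        )
        # 1x1 convolution integrating the multi-scale responses.
        self.fuse = nn.Conv2d(channels * len(scales), channels, kernel_size=1)
        # Pixel-level (spatial) attention over the electrode axis.
        self.spatial_gate = nn.Conv2d(channels, 1, kernel_size=1)
        # Sampling-rate-level (temporal) attention over the time axis.
        self.temporal_gate = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Integrate the multi-scale branch outputs.
        fused = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
        # Spatial weights: average out time, keep per-electrode scores.
        spatial = torch.sigmoid(self.spatial_gate(fused).mean(dim=3, keepdim=True))
        # Temporal weights: average out electrodes, keep per-sample scores.
        temporal = torch.sigmoid(self.temporal_gate(fused).mean(dim=2, keepdim=True))
        # Weight the input along both resolutions at once.
        return x * spatial * temporal


# Usage: a batch of 32 trials, 16 feature channels,
# 32 electrodes (DEAP), 128 time samples.
feats = torch.randn(32, 16, 32, 128)
out = MultiScaleDualAttention(channels=16)(feats)
assert out.shape == feats.shape
```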