Emotion Recognition in EEG Based on Multilevel Multidomain Feature Fusion

Bibliographic Details
Published in: IEEE Access, Vol. 12, p. 1
Main Authors: Li, Zhao Long; Cao, Hui; Zhang, Ji Sai
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2024

Summary: In emotion recognition tasks, electroencephalography (EEG) has gained significant favor among researchers as a powerful biological signal. However, existing studies often fail to fully exploit the high temporal resolution of EEG when combining spatiotemporal and frequency features for emotion recognition, and they lack effective feature-fusion strategies. This paper therefore proposes a multilevel multidomain feature fusion network, MMF-Net, which aims to obtain a more comprehensive spatiotemporal-frequency feature representation and achieve higher emotion-classification accuracy. The model takes the original two-dimensional EEG feature map as input and simultaneously extracts spatiotemporal and spatial-frequency features at different levels, making effective use of the temporal resolution. At each level, a specially designed fusion network layer then combines the captured temporal, spatial, and frequency domain features; this fusion layer also aids model convergence and strengthens the learned feature detectors. In subject-dependent experiments on the DEAP dataset, MMF-Net achieved average accuracies of 99.50% and 99.59% for the valence and arousal dimensions, respectively; in subject-independent experiments, the corresponding averages were 97.46% and 97.54%.
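As a rough illustration of the architecture the summary describes (parallel spatiotemporal and spatial-frequency branches at each level, joined by a fusion layer), the sketch below shows one way such a multilevel two-branch fusion network could be assembled in PyTorch. The class names (FusionLevel, MMFNetSketch), kernel shapes, channel counts, and the 1x1-convolution fusion operator are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a multilevel multidomain fusion model in the spirit of MMF-Net.
# Branch designs, layer sizes, and the fusion operator are assumptions for
# illustration only; the paper's actual architecture is not reproduced here.
import torch
import torch.nn as nn

class FusionLevel(nn.Module):
    """One level: parallel spatiotemporal and spatial-frequency branches + fusion."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Hypothetical spatiotemporal branch: convolution along the time axis.
        self.temporal = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=(1, 7), padding=(0, 3)),
            nn.BatchNorm2d(out_ch), nn.ELU())
        # Hypothetical spatial-frequency branch: convolution along the electrode axis.
        self.spectral = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=(7, 1), padding=(3, 0)),
            nn.BatchNorm2d(out_ch), nn.ELU())
        # Fusion layer: 1x1 convolution over the concatenated branch outputs.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * out_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch), nn.ELU())

    def forward(self, x):
        return self.fuse(torch.cat([self.temporal(x), self.spectral(x)], dim=1))

class MMFNetSketch(nn.Module):
    """Stack of fusion levels followed by a valence/arousal classifier head."""
    def __init__(self, levels=(16, 32, 64), n_classes: int = 2):
        super().__init__()
        blocks, in_ch = [], 1
        for out_ch in levels:
            blocks += [FusionLevel(in_ch, out_ch), nn.AvgPool2d(2)]
            in_ch = out_ch
        self.backbone = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(in_ch, n_classes))

    def forward(self, x):          # x: (batch, 1, electrodes, time)
        return self.head(self.backbone(x))

if __name__ == "__main__":
    # Example: a 2-D EEG feature map with 32 electrodes and 128 time samples.
    dummy = torch.randn(4, 1, 32, 128)
    print(MMFNetSketch()(dummy).shape)   # torch.Size([4, 2])
```

In this sketch each level convolves along the time axis in one branch and along the electrode axis in the other, fuses the two feature maps, and then downsamples; a faithful reimplementation would follow the layer configuration reported in the paper itself.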
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3417525