MetaEmotionNet: Spatial–Spectral–Temporal-Based Attention 3-D Dense Network With Meta-Learning for EEG Emotion Recognition


Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, Vol. 73, pp. 1-13
Main Authors: Ning, Xiaojun; Wang, Jing; Lin, Youfang; Cai, Xiyang; Chen, Haobin; Gou, Haijun; Li, Xiaoli; Jia, Ziyu
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024
Summary: Emotion recognition has become an important area in affective computing, and emotion recognition based on multichannel electroencephalogram (EEG) signals has gradually become popular in recent years. However, on one hand, it is challenging to make full use of different EEG features and of the discriminative local patterns among those features for various emotions. Existing methods ignore the complementarity among the spatial-spectral-temporal features and the discriminative local patterns in all features, which limits classification performance. On the other hand, when dealing with cross-subject emotion recognition, existing transfer learning (TL) methods require large amounts of training data. At the same time, collecting labeled EEG data is extremely expensive and time-consuming, which hinders the wide application of emotion recognition models to new subjects. To address these challenges, we propose a novel spatial-spectral-temporal-based attention 3-D dense network (SST-Net) with meta-learning, named MetaEmotionNet, for emotion recognition. Specifically, MetaEmotionNet integrates the spatial-spectral-temporal features simultaneously in a unified network framework through two-stream fusion. At the same time, the 3-D attention mechanism can adaptively explore discriminative local patterns. In addition, a meta-learning algorithm is applied to reduce dependence on training data. Experiments demonstrate that MetaEmotionNet outperforms the baseline models on two benchmark datasets.
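The abstract does not give implementation details of the meta-learning component. As a rough, hedged illustration of the kind of inner/outer update that lets a model adapt to a new subject from few labeled samples, the following sketch runs a first-order MAML-style loop on toy linear-regression "tasks" standing in for subjects; the actual MetaEmotionNet operates on EEG spatial-spectral-temporal features with a 3-D dense attention network, and none of the names or hyperparameters below come from the paper.

```python
import numpy as np

# Hedged sketch: a minimal first-order MAML-style meta-learning loop on toy
# linear-regression tasks. Each task plays the role of one "subject"; the
# goal is a shared initialization w that adapts to a new task in one
# gradient step from only a few samples.

rng = np.random.default_rng(0)

def make_task():
    """One toy 'subject': y = w_true * x with a task-specific slope."""
    w_true = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=10)
    return x, w_true * x

def loss_grad(w, x, y):
    """Gradient of mean squared error for the scalar model y_hat = w * x."""
    return 2.0 * np.mean((w * x - y) * x)

def maml_step(w, tasks, inner_lr=0.1, outer_lr=0.05):
    """One meta-update: adapt per task, then update the shared init w."""
    meta_grad = 0.0
    for x, y in tasks:
        # Inner (adaptation) step on this task's few labeled samples.
        w_adapted = w - inner_lr * loss_grad(w, x, y)
        # First-order approximation: evaluate the gradient at the adapted
        # weights instead of differentiating through the inner step.
        meta_grad += loss_grad(w_adapted, x, y)
    return w - outer_lr * meta_grad / len(tasks)

w = 0.0
for _ in range(100):
    w = maml_step(w, [make_task() for _ in range(4)])

# For a new task ("subject"), a single gradient step from the meta-learned
# initialization specializes the model with only a few labeled samples.
x_new, y_new = make_task()
w_fast = w - 0.1 * loss_grad(w, x_new, y_new)
```

The design choice illustrated here is the split between the inner loop (cheap, per-subject adaptation) and the outer loop (slow update of the shared initialization), which is what reduces the amount of labeled data needed for each new subject.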
ISSN: 0018-9456
EISSN: 1557-9662
DOI: 10.1109/TIM.2023.3338676