MI-EEG: Generalized model based on mutual information for EEG emotion recognition without adversarial training

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 244, p. 122777
Main Authors: Wang, Yingdong; Wu, Qingfeng; Wang, Shuocheng; Fang, XiQiao; Ruan, Qungsheng
Format: Journal Article
Language: English
Published: Elsevier Ltd, 15.06.2024
Summary: EEG-based emotion classification is a vital aspect of human–machine interfaces. However, inter-subject variability poses a challenge for accurate domain-agnostic EEG emotion recognition, often requiring individual model calibration with a robust base model for fine-tuning. To overcome this limitation and develop a generalized model, we propose a Generalized Model based on Mutual Information for EEG Emotion Recognition without Adversarial Training (MI-EEG). The MI-EEG model leverages disentanglement to extract shared features, separating EEG features into domain-invariant class-relevant features and other features. To avoid adversarial training, mutual information minimization is applied during the decoupling process. Additionally, mutual information maximization is used to enrich the features by strengthening the relationship between domain-invariant class-relevant features and emotion labels. Furthermore, a transformer-based feature extractor, which combines a multi-headed attention mechanism with pooling operations, enhances feature quality along the time dimension. Experimental evaluation on two emotional EEG datasets demonstrates the superior performance of the proposed MI-EEG model compared to existing state-of-the-art methods.

Highlights:
• Minimizing a mutual information upper bound separates Fre and Fir.
• Maximizing a mutual information lower bound makes Fre and Fg more relevant.
• The multi-head attention neural network makes the model more robust.
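The record does not say which mutual information estimators the paper uses. As a purely illustrative sketch of the maximization step (strengthening the dependence between features and labels without an adversarial discriminator), an InfoNCE-style lower bound scores matched pairs against mismatched ones; the function name, the dot-product critic, and the toy data below are all assumptions, not the paper's method:

```python
import numpy as np

def infonce_lower_bound(x, y, score):
    """InfoNCE-style lower bound on I(X;Y) from n paired samples.

    x, y: (n, d) arrays of paired samples; score(xi, yj) -> scalar critic.
    Returns mean_i [ s_ii - log mean_j exp(s_ij) ], which is at most log n.
    """
    n = x.shape[0]
    s = np.array([[score(x[i], y[j]) for j in range(n)] for i in range(n)])
    s = s - s.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    log_ratio = s[np.arange(n), np.arange(n)] - np.log(np.exp(s).mean(axis=1))
    return log_ratio.mean()

rng = np.random.default_rng(0)
n, d = 256, 4
x = rng.standard_normal((n, d))
y_dep = x + 0.1 * rng.standard_normal((n, d))  # strongly dependent pairs
y_ind = rng.standard_normal((n, d))            # independent pairs

dot = lambda a, b: float(a @ b)                # toy critic: inner product
b_dep = infonce_lower_bound(x, y_dep, dot)     # large: matched pairs score highest
b_ind = infonce_lower_bound(x, y_ind, dot)     # lower: critic finds no dependence
```

The same variational family has upper-bound counterparts (e.g. CLUB-style estimators) whose minimization can drive the decoupling of Fre from Fir, which is consistent with the abstract's stated goal of avoiding adversarial training, though the record does not confirm which bound the authors adopt.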
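The transformer-based extractor is described only as "multi-headed attention plus pooling" over the time dimension. A minimal NumPy sketch of that combination on a toy feature sequence might look as follows; the shapes, the shared projection matrices, and mean pooling are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multihead_attention_pool(x, wq, wk, wv, n_heads):
    """Self-attention over the time axis followed by mean pooling.

    x: (T, D) sequence of per-window features (illustrative shapes).
    wq, wk, wv: (D, D) projections, split evenly across n_heads heads.
    Returns a (D,) pooled feature vector for the whole sequence.
    """
    T, D = x.shape
    dh = D // n_heads
    q, k, v = x @ wq, x @ wk, x @ wv
    heads = []
    for h in range(n_heads):
        sl = slice(h * dh, (h + 1) * dh)
        attn = softmax(q[:, sl] @ k[:, sl].T / np.sqrt(dh))  # (T, T) weights
        heads.append(attn @ v[:, sl])                        # (T, dh) per head
    out = np.concatenate(heads, axis=1)                      # (T, D) combined
    return out.mean(axis=0)                                  # pool over time

rng = np.random.default_rng(1)
T, D, H = 8, 16, 4
x = rng.standard_normal((T, D))
wq, wk, wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
pooled = multihead_attention_pool(x, wq, wk, wv, H)
```

Pooling after attention collapses the time axis into a fixed-size vector, which is what lets a downstream classifier operate on variable-length EEG windows.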
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2023.122777