Generating personalized facial emotions using emotional EEG signals and conditional generative adversarial networks

Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 83, No. 12, pp. 36013–36038
Main Authors: Esmaeili, Masoumeh; Kiani, Kourosh
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.04.2024
Summary: Facial expressions are one of the most effective and straightforward ways of conveying our emotions and intentions. Therefore, it is crucial to conduct research aimed at developing a Brain-Computer Interface (BCI) that can assist individuals with facial motor disabilities in expressing their emotions. This paper proposes a hybrid GAN-based model that reconstructs an individual's personalized facial expression from the corresponding emotional ElectroEncephaloGram (EEG) signals, utilizing a conditional Generative Adversarial Network (cGAN). To recognize emotions from EEG signals, a novel method called CSP-BiLSTM is introduced, which combines the Common Spatial Pattern (CSP) and a Bidirectional Long Short-Term Memory (BiLSTM) network to explore the spatial and temporal dependencies of the raw EEG signals. Finally, a Fully Connected (FC) layer with a Softmax activation function is applied to the extracted spatiotemporal features to recognize the label of the EEG signals. The predicted emotional label is then input into a cGAN, along with a neutral facial image, to emotionally reconstruct the input image. Experimental results on the SEED dataset demonstrate that the proposed CSP-BiLSTM model outperforms previous models in both subject-dependent and cross-subject scenarios, achieving 99.97% and 99.93% accuracy, respectively, on the three-class classification task. A thorough evaluation of the images generated by the cGAN is conducted using three challenging facial expression benchmarks: AffectNet, CK+, and CelebA. The results of applying FID, emotion classification using a pre-trained ArcFace model, and human evaluation indicated that samples synthesized using this model outperformed those generated by recent techniques.
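
The abstract describes the recognition pipeline in enough detail to sketch its shape: CSP spatial filtering of the raw EEG, a BiLSTM over the filtered time series, and an FC layer with Softmax producing the emotion label. The snippet below is a minimal, hypothetical PyTorch sketch of that classifier head; the layer sizes, the 62-channel/8-component shapes, and the use of the last BiLSTM time step are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a CSP-BiLSTM classifier as outlined in the abstract.
# Layer sizes, tensor shapes, and the stand-in CSP filters are assumptions.
import numpy as np
import torch
import torch.nn as nn

class CSPBiLSTM(nn.Module):
    """BiLSTM over CSP-projected EEG, followed by an FC + Softmax head."""

    def __init__(self, n_csp_components: int = 8, hidden_size: int = 64,
                 n_classes: int = 3):
        super().__init__()
        self.bilstm = nn.LSTM(input_size=n_csp_components,
                              hidden_size=hidden_size,
                              batch_first=True,
                              bidirectional=True)
        self.fc = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_csp_components) -- EEG already projected by CSP filters
        out, _ = self.bilstm(x)          # (batch, time, 2 * hidden_size)
        out = out[:, -1, :]              # last time step summarizes the sequence
        return torch.softmax(self.fc(out), dim=-1)

# Toy usage: CSP spatial filters W (channels -> components) are applied per trial,
# then the BiLSTM predicts one of three emotion classes (negative/neutral/positive).
n_channels, n_components, n_times = 62, 8, 200
W = np.random.randn(n_components, n_channels)      # stand-in for learned CSP filters
trial = np.random.randn(n_channels, n_times)       # one raw EEG trial
projected = (W @ trial).T[None]                     # (1, time, components)
model = CSPBiLSTM(n_csp_components=n_components)
probs = model(torch.tensor(projected, dtype=torch.float32))
print(probs.shape)  # torch.Size([1, 3])
```

The predicted label would then condition the cGAN generator alongside a neutral face image; that generative stage is not sketched here because the abstract does not specify its architecture.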
ISSN: 1380-7501 (print); 1573-7721 (electronic)
DOI: 10.1007/s11042-023-17018-w