Unveiling the Influence of Modeling Approach and Gender in Subject Independent Multimodal Emotion Recognition Using EOG and PPG


Bibliographic Details
Published in: IEEE Access, Vol. 12, pp. 177342-177354
Main Authors: Ramaswamy, Manju Priya Arthanarisamy; Palaniswamy, Suja
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024
Summary: Multimodal emotion recognition identifies emotions from multiple modalities, such as facial expressions, speech, gestures, text, and physiological signals including the electroencephalogram (EEG), electrooculogram (EOG), and plethysmograph (PPG). This work focuses on emotion recognition using EOG and PPG. Valence and arousal are the two fundamental dimensions of emotion. This study investigates whether joint prediction of the arousal and valence emotion dimensions is preferable to independent prediction of each dimension in subject-independent multimodal emotion recognition using EOG and PPG. Additionally, the study explores the influence of gender on model evaluation metrics. The results based on the DEAP dataset indicate that independent prediction of arousal and valence, with gender included as an independent variable, yields statistically significant improvements in some model evaluation metrics. Including gender as an independent variable improves the RMSE and F1-Measure for independent arousal prediction, while the ROC area improves for independent valence prediction. Independent models for valence and arousal improve the accuracy and F1-Measure evaluation metrics by at least 53.25% over the multi-class approach and 25.00% over the multi-label approach.
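
The three modeling approaches compared in the summary can be illustrated with a minimal sketch, assuming synthetic placeholder features in place of the EOG/PPG features extracted from DEAP and labels binarized at the midpoint of the rating scale; the scikit-learn pipeline below is a hypothetical illustration of the independent, multi-label, and multi-class formulations, not the authors' implementation.

    # Hypothetical sketch: independent vs. multi-label vs. multi-class
    # formulations for binary arousal/valence prediction.
    # X is a placeholder for EOG/PPG features; labels are random stand-ins.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.multioutput import MultiOutputClassifier

    rng = np.random.default_rng(0)
    n_samples, n_features = 1280, 64                   # e.g. 32 subjects x 40 trials
    X = rng.normal(size=(n_samples, n_features))       # placeholder EOG/PPG features
    gender = rng.integers(0, 2, size=(n_samples, 1))   # gender as an independent variable
    arousal = rng.integers(0, 2, size=n_samples)       # high/low arousal
    valence = rng.integers(0, 2, size=n_samples)       # high/low valence

    X = np.hstack([X, gender])
    Y = np.column_stack([arousal, valence])
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

    # 1) Independent models: one binary classifier per emotion dimension.
    for i, name in enumerate(["arousal", "valence"]):
        clf = RandomForestClassifier(random_state=0).fit(X_tr, Y_tr[:, i])
        pred = clf.predict(X_te)
        print(f"independent {name}: acc={accuracy_score(Y_te[:, i], pred):.3f} "
              f"f1={f1_score(Y_te[:, i], pred):.3f}")

    # 2) Multi-label model: both dimensions predicted jointly, one label each.
    ml = MultiOutputClassifier(RandomForestClassifier(random_state=0)).fit(X_tr, Y_tr)
    print("multi-label subset accuracy:", accuracy_score(Y_te, ml.predict(X_te)))

    # 3) Multi-class model: the four arousal/valence combinations as one label.
    y4_tr, y4_te = Y_tr[:, 0] * 2 + Y_tr[:, 1], Y_te[:, 0] * 2 + Y_te[:, 1]
    mc = RandomForestClassifier(random_state=0).fit(X_tr, y4_tr)
    print("multi-class accuracy:", accuracy_score(y4_te, mc.predict(X_te)))

On real features, the per-dimension metrics from step 1 would be compared against the joint formulations in steps 2 and 3, which is the comparison the study reports.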
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3506157