Multimodal Techniques for Emotion Recognition

Bibliographic Details
Published in: 2021 International Conference on Computational Intelligence and Computing Applications (ICCICA), pp. 1 - 6
Main Authors: Agarwal, Devangi; Desai, Sharmishta
Format: Conference Proceeding
Language: English
Published: IEEE, 26.11.2021

More Information
Summary: Human behaviour and actions are greatly affected by emotions, and human-computer interaction (HCI) has made interpreting emotions easier. Modalities such as Facial Emotion Recognition (FER), which considers human facial features; Speech Emotion Recognition (SER), which concentrates on the texture of human speech; Electroencephalography (EEG), which deals with brain waves; and Electrocardiography (ECG), which focuses on heart rate, are among the widely used unimodal systems for recognizing emotions. This paper examines how multimodal systems tend to provide more accurate results than existing unimodal systems. To implement a multimodal system, two fusion methods were considered: Feature Level Fusion and Decision Level Fusion. It was observed that Feature Level Fusion was preferred by most researchers because it provides more valid results when the fused features are compatible. Facial-Speech, Speech-ECG, and Facial-EEG are among the popular multimodal combinations implemented by various researchers; of these, Facial-EEG provided the most robust and efficient results.
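To make the two fusion strategies contrasted in the abstract concrete, the sketch below illustrates them in Python. It is not taken from the paper: the feature dimensions, the synthetic data, and the scikit-learn classifiers are illustrative assumptions standing in for real FER and SER front ends.

```python
# Minimal sketch of Feature Level Fusion vs. Decision Level Fusion
# for two-modality emotion recognition. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-sample features from two unimodal front ends,
# e.g. a facial-expression embedding and speech spectral statistics.
n_samples = 200
facial_feats = rng.normal(size=(n_samples, 64))   # stand-in for FER features
speech_feats = rng.normal(size=(n_samples, 40))   # stand-in for SER features
labels = rng.integers(0, 4, size=n_samples)       # four emotion classes

# --- Feature Level Fusion (early fusion) ---
# Concatenate the modality features into one joint vector, then train
# a single classifier on the combined representation.
joint = np.concatenate([facial_feats, speech_feats], axis=1)
early_clf = LogisticRegression(max_iter=1000).fit(joint, labels)
early_pred = early_clf.predict(joint)

# --- Decision Level Fusion (late fusion) ---
# Train one classifier per modality, then combine their class
# probabilities (here by simple averaging) into a final decision.
face_clf = LogisticRegression(max_iter=1000).fit(facial_feats, labels)
speech_clf = LogisticRegression(max_iter=1000).fit(speech_feats, labels)

avg_proba = (face_clf.predict_proba(facial_feats)
             + speech_clf.predict_proba(speech_feats)) / 2.0
late_pred = avg_proba.argmax(axis=1)
```

The paper's observation maps onto this distinction: feature-level (early) fusion can exploit cross-modal correlations but requires the fused features to be compatible and aligned per sample, whereas decision-level (late) fusion only combines each unimodal model's outputs, so it tolerates heterogeneous modalities at the cost of discarding cross-modal interactions.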
DOI: 10.1109/ICCICA52458.2021.9697294