Multimodal Emotion Recognition Using Different Fusion Techniques
Published in | 2021 Seventh International conference on Bio Signals, Images, and Instrumentation (ICBSII), pp. 1 - 6
---|---
Main Authors | , , , ,
Format | Conference Proceeding
Language | English
Published | IEEE, 25.03.2021
Summary | Human beings understand and visualize various emotions on a daily basis by noticing features such as facial muscle movements, speech, and hand gestures. Automated emotion recognition is an important problem and has been an active research topic in recent years. Recently, several researchers have combined two or more unimodal sources to improve recognition. This paper presents an approach to emotion recognition that uses three modalities: facial images, audio signals, and electroencephalogram (EEG) signals, drawn from the FER and CK+, RAVDESS, and SEED-IV datasets respectively. Various fusion techniques were evaluated, each yielding different results; the maximum accuracy of 71.24% was obtained with an autoencoder-based fusion combined with an SVM classifier.
DOI | 10.1109/ICBSII51839.2021.9445146
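
The abstract describes fusing facial, audio, and EEG features through an autoencoder and classifying the latent representation with an SVM. Below is a minimal sketch of that general pipeline; the feature dimensions, network sizes, training schedule, and synthetic data are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch: autoencoder-based fusion of three modality feature
# vectors followed by an SVM classifier. All dimensions and data are
# placeholder assumptions for illustration only.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Assume per-sample feature vectors have already been extracted per modality.
n_samples, d_face, d_audio, d_eeg, n_classes = 600, 128, 64, 310, 4
rng = np.random.default_rng(0)
X_face = rng.normal(size=(n_samples, d_face)).astype(np.float32)
X_audio = rng.normal(size=(n_samples, d_audio)).astype(np.float32)
X_eeg = rng.normal(size=(n_samples, d_eeg)).astype(np.float32)
y = rng.integers(0, n_classes, size=n_samples)

# Early fusion: concatenate the modality features into one vector per sample.
X = np.concatenate([X_face, X_audio, X_eeg], axis=1)

class FusionAutoencoder(nn.Module):
    """Compress the concatenated multimodal vector into a shared latent code."""
    def __init__(self, d_in, d_latent=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(),
                                     nn.Linear(256, d_latent))
        self.decoder = nn.Sequential(nn.Linear(d_latent, 256), nn.ReLU(),
                                     nn.Linear(256, d_in))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = FusionAutoencoder(X.shape[1])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
X_t = torch.from_numpy(X)

# Train the autoencoder to reconstruct the fused feature vector.
for epoch in range(50):
    opt.zero_grad()
    recon, _ = model(X_t)
    loss = loss_fn(recon, X_t)
    loss.backward()
    opt.step()

# Use the learned latent codes as fused features for the SVM classifier.
with torch.no_grad():
    _, Z = model(X_t)
split = int(0.8 * n_samples)
svm = SVC(kernel="rbf").fit(Z[:split].numpy(), y[:split])
print("accuracy:", accuracy_score(y[split:], svm.predict(Z[split:].numpy())))
```

On random placeholder data the printed accuracy is near chance; with real extracted features from the three modalities, the same structure reflects the autoencoder-plus-SVM fusion strategy the abstract reports as its best-performing configuration.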