Decision-Level Fusion Method for Emotion Recognition using Multimodal Emotion Recognition Information

Bibliographic Details
Published in: 2018 15th International Conference on Ubiquitous Robots (UR), pp. 472-476
Main Authors: Song, Kyu-Seob; Nho, Young-Hoon; Seo, Ju-Hwan; Kwon, Dong-soo
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2018

Summary: Human emotion recognition is an important factor for social robots. In previous research, emotion recognizers with many modalities have been studied, but several problems lower recognition rates when a recognizer is applied to a robot. This paper proposes a decision-level fusion method that takes the outputs of each recognizer as inputs and determines which combination of features achieves the highest accuracy. We used EdNet, a facial expression recognizer developed at KAIST based on Convolutional Neural Networks (CNNs), and a speech analytics engine developed for speech emotion recognition. Finally, we confirmed a higher accuracy of 43.40% using an artificial neural network (ANN) or the k-Nearest Neighbor (k-NN) algorithm to classify combinations of features from EdNet and the speech analytics engine.
DOI: 10.1109/URAI.2018.8441795
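
The decision-level fusion described in the summary (concatenating each recognizer's output and classifying the result with k-NN or an ANN) can be illustrated with a minimal sketch. This is not the authors' implementation: the random probability vectors stand in for EdNet and speech-engine outputs, and the seven emotion classes, k = 5, and scikit-learn's k-NN are all assumptions made for illustration.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 7  # 7 emotion classes is an assumption

# Placeholder recognizer outputs: in the paper these would come from
# EdNet (facial expression) and the speech analytics engine. Here we
# fabricate random class-probability vectors purely for illustration.
face_probs = rng.dirichlet(np.ones(n_classes), size=n_samples)
speech_probs = rng.dirichlet(np.ones(n_classes), size=n_samples)
labels = rng.integers(0, n_classes, size=n_samples)

# Decision-level fusion: stack both recognizers' output vectors into
# one feature vector per sample, then train a classifier on top.
fused = np.hstack([face_probs, speech_probs])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5)  # k is an assumed hyperparameter
clf.fit(X_train, y_train)
print(f"fusion accuracy: {clf.score(X_test, y_test):.2%}")

With real recognizer outputs and labeled emotion data, the same structure lets one compare feature combinations (face only, speech only, fused) by swapping what is stacked into the fused vector, which matches the paper's stated goal of finding the combination with the highest accuracy.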