Decision-Level Fusion Method for Emotion Recognition using Multimodal Emotion Recognition Information
| Published in | 2018 15th International Conference on Ubiquitous Robots (UR), pp. 472-476 |
|---|---|
| Main Authors | , , , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.06.2018 |
| Summary | Human emotion recognition is an important factor for social robots. In previous research, emotion recognizers with many modalities have been studied, but several problems lower the recognition rate when a recognizer is applied to a robot. This paper proposes a decision-level fusion method that takes the outputs of each recognizer as input and determines which combination of features achieves the highest accuracy. We used EdNet, a facial expression recognizer developed at KAIST based on Convolutional Neural Networks (CNNs), together with a speech analytics engine developed for speech emotion recognition. Finally, we confirmed a higher accuracy of 43.40% using an artificial neural network (ANN) or the k-Nearest Neighbor (k-NN) algorithm to classify combinations of features from EdNet and the speech analytics engine. |
| DOI | 10.1109/URAI.2018.8441795 |
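The decision-level fusion described in the summary (concatenating each modality recognizer's output scores into one feature vector, then classifying the combination, e.g. with k-NN) can be sketched roughly as below. All function names, feature values, and emotion labels here are illustrative assumptions for the general technique, not the paper's actual pipeline or data:

```python
import math
from collections import Counter

def fuse(face_scores, speech_scores):
    """Decision-level fusion: concatenate the per-class output scores
    of the facial and speech recognizers into one feature vector."""
    return list(face_scores) + list(speech_scores)

def knn_predict(train, query, k=3):
    """Classify a fused vector by majority vote among the k nearest
    training vectors (Euclidean distance)."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy training set of (fused vector, emotion label); the scores stand in
# for hypothetical per-class probabilities from each recognizer.
train = [
    (fuse([0.9, 0.1], [0.8, 0.2]), "happy"),
    (fuse([0.8, 0.2], [0.7, 0.3]), "happy"),
    (fuse([0.2, 0.8], [0.1, 0.9]), "sad"),
    (fuse([0.1, 0.9], [0.3, 0.7]), "sad"),
]

query = fuse([0.85, 0.15], [0.75, 0.25])
print(knn_predict(train, query, k=3))  # prints "happy"
```

An ANN could be swapped in for `knn_predict` as the fusion classifier, as the summary notes; the key property of decision-level fusion is that each modality is recognized independently first, and only the recognizers' outputs are combined.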