Real-time facial expression recognition using smoothed deep neural network ensemble

Bibliographic Details
Published in: Integrated Computer-Aided Engineering, Vol. 28, No. 1, pp. 97-111
Main Authors: Benamara, Nadir Kamel; Val-Calvo, Mikel; Álvarez-Sánchez, Jose Ramón; Díaz-Morcillo, Alejandro; Ferrández-Vicente, Jose Manuel; Fernández-Jover, Eduardo; Stambouli, Tarik Boudghene
Format: Journal Article
Language: English
Published: London, England: SAGE Publications Ltd, 01.01.2021
ISSN: 1069-2509, 1875-8835
DOI: 10.3233/ICA-200643

Summary: Facial emotion recognition (FER) has been extensively researched over the past two decades due to its direct impact on the computer vision and affective robotics fields. However, the datasets available to train these models often include mislabelled data caused by labeller bias, which drives the models to learn incorrect features. In this paper, a facial emotion recognition system is proposed that addresses automatic face detection and facial expression recognition separately; the latter is performed by an ensemble of only four deep convolutional neural networks, while a label smoothing technique is applied to deal with the mislabelled training data. The proposed system takes only 13.48 ms using a dedicated graphics processing unit (GPU) and 141.97 ms using a CPU to recognize facial emotions, and reaches current state-of-the-art performance on the challenging databases FER2013, SFEW 2.0, and ExpW, with recognition accuracies of 72.72%, 51.97%, and 71.82%, respectively.
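
The two ideas highlighted in the summary, label smoothing of possibly mislabelled targets and averaging the outputs of a small CNN ensemble, can be sketched in a few lines of NumPy. This is only an illustration of the general techniques, not the authors' implementation: the smoothing factor of 0.1, the seven emotion classes, and the random probabilities standing in for the four network outputs are assumptions made for the example.

import numpy as np

def smooth_labels(labels, num_classes, epsilon=0.1):
    # Keep (1 - epsilon) probability mass on the annotated class and spread
    # epsilon uniformly over all classes, so a wrong annotation is penalised less.
    one_hot = np.eye(num_classes)[labels]
    return (1.0 - epsilon) * one_hot + epsilon / num_classes

def cross_entropy(probs, targets, eps=1e-12):
    # Mean cross-entropy between predicted probabilities and (smoothed) targets.
    return -np.mean(np.sum(targets * np.log(probs + eps), axis=1))

def ensemble_predict(prob_list):
    # Average the softmax outputs of the individual networks, then take the arg-max.
    mean_probs = np.mean(np.stack(prob_list, axis=0), axis=0)
    return mean_probs.argmax(axis=1)

# Toy usage: 7 emotion classes, a batch of 4 faces, 4 stand-in "CNN" outputs.
rng = np.random.default_rng(0)
labels = np.array([0, 3, 6, 2])
targets = smooth_labels(labels, num_classes=7, epsilon=0.1)
model_probs = [rng.dirichlet(np.ones(7), size=4) for _ in range(4)]
print("smoothed loss of model 0:", cross_entropy(model_probs[0], targets))
print("ensemble predictions:", ensemble_predict(model_probs))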