Improving the Accuracy of Automatic Facial Expression Recognition in Speaking Subjects with Deep Learning

Bibliographic Details
Published in: Applied Sciences, vol. 10, no. 11, p. 4002
Main Authors: Bursic, Sathya; Boccignone, Giuseppe; Ferrara, Alfio; D’Amelio, Alessandro; Lanzarotti, Raffaella
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.06.2020

Summary: When automatic facial expression recognition is applied to video sequences of speaking subjects, the recognition accuracy has been noted to be lower than with video sequences of still subjects. This effect, known as the speaking effect, arises during spontaneous conversations because, along with the affective expressions, the speech articulation process influences facial configurations. In this work we ask whether cues related to the articulation process, beyond facial features alone, increase emotion recognition accuracy when added as input to a deep neural network model. We develop two neural networks that classify facial expressions in speaking subjects from the RAVDESS dataset: a spatio-temporal CNN and a GRU-cell RNN. They are first trained on facial features only, and then on both facial features and articulation-related cues extracted from a model trained for lip reading, while also varying the number of consecutive frames provided as input. We show that, with DNNs, adding articulation-related features increases classification accuracy by up to 12%, with the increase being greater when more consecutive frames are provided as input to the model.
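
The abstract describes, among the two models, a GRU-cell RNN that consumes per-frame facial features concatenated with articulation-related cues from a lip-reading model. The following is a minimal PyTorch sketch of that idea; the feature dimensions, hidden size, single-layer GRU, and the eight RAVDESS emotion classes are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class EmotionGRU(nn.Module):
    """Sketch of a GRU RNN classifying emotion from per-frame features.

    Assumed dimensions (hypothetical): 136 facial-landmark values,
    256 articulation features from a lip-reading model, 8 RAVDESS classes.
    """
    def __init__(self, facial_dim=136, artic_dim=256, hidden_dim=128, n_classes=8):
        super().__init__()
        # Per-frame input = facial features concatenated with
        # articulation-related features.
        self.gru = nn.GRU(facial_dim + artic_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, facial_feats, artic_feats):
        # facial_feats: (batch, frames, facial_dim)
        # artic_feats:  (batch, frames, artic_dim)
        x = torch.cat([facial_feats, artic_feats], dim=-1)
        _, h_last = self.gru(x)              # h_last: (1, batch, hidden_dim)
        return self.classifier(h_last[-1])   # logits over emotion classes

# Example: a batch of 4 clips, 30 consecutive frames each.
model = EmotionGRU()
logits = model(torch.randn(4, 30, 136), torch.randn(4, 30, 256))
print(logits.shape)  # torch.Size([4, 8])
```

Varying the number of consecutive frames, as studied in the paper, corresponds here to changing the sequence length (30 above); the GRU's final hidden state summarizes however many frames are provided.
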
ISSN: 2076-3417
DOI: 10.3390/app10114002