Automatic facial expression localization and recognition across a large range of emotions

Bibliographic Details
Published in: Signal, Image and Video Processing, Vol. 19, No. 3
Main Authors: Al-Garaawi, Nora; Morris, Tim; Cootes, Tim F.
Format: Journal Article
Language: English
Published: London: Springer London, 01.03.2025
Springer Nature B.V.
Summary: Automatic facial expression recognition (AFER) works well when subjects are restricted to the six basic expressions (BE). Recognition across the larger range of 22 compound expressions (CE) is harder: CEs partially resemble BEs, which can cause substantial confusion in AFER. We present a discriminative system that predicts expression across this large range of emotions. First, we build a fully automatic facial feature detector using Random Forest Regression Voting in a Constrained Local Model (RFRV-CLM) framework to detect facial points, and study the effect of CEs on the accuracy of point localization. Second, a set of expression recognizers is trained on the extracted shape, texture, and appearance features to analyze the effect of CEs on these features and, in turn, on AFER performance. Performance was evaluated on the CE dataset of 22 emotions, and the results show the system to be accurate and robust across a wide variety of expressions. Point localization and expression recognition were evaluated against ground-truth data and compared with published results of alternative approaches on the same data. Recognition rates of 55.6% with 2.1% error rates using manual points, and 51.8% with 2.1% error rates using automatic points, are encouraging in comparison with state-of-the-art systems.
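The regression-voting idea behind RFRV-CLM can be sketched in one dimension: a forest is trained to predict, from a patch sampled near a landmark, the displacement from the patch centre back to the landmark; at test time many patches each cast a vote, and the votes are accumulated robustly. The sketch below is illustrative only, using scikit-learn's `RandomForestRegressor` in place of the paper's forests and CLM shape constraints; the synthetic data, noise levels, and parameters are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
true_landmark = 0.0  # 1-D "image" coordinate of the landmark

# Training: sample patch centres around the landmark. The (noisy) patch
# centre stands in for real patch features; the regression target is the
# displacement from the patch centre back to the landmark.
patch_centres = rng.uniform(-10.0, 10.0, size=500)
features = np.column_stack([patch_centres + rng.normal(0.0, 0.3, 500)])
displacements = true_landmark - patch_centres

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(features, displacements)

# Voting: patches sampled near a rough initial estimate each predict a
# displacement; patch centre + predicted displacement is that patch's vote.
initial_estimate = 4.0
test_centres = initial_estimate + rng.uniform(-3.0, 3.0, size=40)
test_features = np.column_stack([test_centres + rng.normal(0.0, 0.3, 40)])
votes = test_centres + forest.predict(test_features)

# Robust accumulation of the votes (the paper accumulates votes in a 2-D
# grid; the median plays the same role here).
estimate = float(np.median(votes))
```

With many noisy votes pooled this way, the estimate lands close to the true landmark even from a poor initialization, which is the property that makes regression voting robust in a CLM search.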
ISSN: 1863-1703, 1863-1711
DOI: 10.1007/s11760-025-03822-4