Deep Recognition of Facial Expressions in Movies

Bibliographic Details
Published in: 2022 International Conference on Technologies and Applications of Artificial Intelligence (TAAI), pp. 60-65
Main Authors: Chen, Lieu-Hen; Wu, Hsiao-Kuang; Shimokawara, Eri; Hung, Hao-Ming; Ong-Lim, Wei-Chek
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2022

More Information
Summary: Consumer feedback is used in many fields for various purposes. However, traditional paper questionnaires and online surveys cannot fully meet the demand for accurate and useful feedback from consumers. Therefore, in this study we propose deep-learning-based recognition of facial micro-expressions in order to obtain more realistic feedback from users. To achieve this goal, we integrate several approaches: 1. using a trained face detection model to capture the face image from the input; 2. training a highly accurate 468-point landmark detection model on multiple face datasets, where, based on the FACS (Facial Action Coding System) table, we categorize these landmarks into 13 groups of facial regions and use these regions, with specific emotion labels, as our target units for AU (Action Unit) detection; 3. training a CNN model to detect and analyze AUs from the facial landmark data; 4. applying FACS to evaluate the facial expressions and emotions; and 5. using a straightforward GUI plotter to show the digitized emotions. The experimental results show that not only the primary but also the secondary emotions of users watching movies can be detected and evaluated successfully. Therefore, our system has great potential for detecting micro-expressions in a more accurate and comprehensive manner.
ISSN: 2376-6824
DOI: 10.1109/TAAI57707.2022.00020
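
The summary above describes a pipeline of face detection, 468-point landmark extraction, FACS-based grouping into facial regions, and CNN-based AU detection. As an illustrative sketch only, and not the authors' implementation, the Python code below shows how the front end of such a pipeline could be assembled: MediaPipe Face Mesh is assumed as a stand-in landmark detector (it also outputs 468 points per face), the region grouping is a hypothetical abbreviation of the paper's 13 FACS-based groups, and the small CNN is an untrained placeholder for the AU classifier.

# Illustrative sketch only -- not the authors' code. MediaPipe Face Mesh (which also
# outputs 468 facial landmarks) is assumed as a stand-in for the paper's landmark
# model, and the small PyTorch CNN is an untrained placeholder for the AU detector.
import cv2
import mediapipe as mp
import numpy as np
import torch
import torch.nn as nn

# Hypothetical example of the FACS-based region grouping described in the summary;
# the paper defines 13 such groups, only two are sketched here.
REGION_GROUPS = {
    "left_eyebrow": [70, 63, 105, 66, 107],
    "outer_lips": [61, 146, 91, 181, 84, 17, 314, 405, 321, 375, 291],
}

class AUClassifier(nn.Module):
    """Placeholder 1-D CNN over the 468 landmark coordinates (3 channels: x, y, z)."""
    def __init__(self, num_aus: int = 13):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, num_aus),
        )

    def forward(self, x):                      # x: (batch, 3, 468)
        return torch.sigmoid(self.net(x))      # per-AU activation scores in [0, 1]

def landmarks_from_frame(face_mesh, frame_bgr):
    """Return a (468, 3) float32 array of normalized landmarks, or None if no face."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = face_mesh.process(rgb)
    if not result.multi_face_landmarks:
        return None
    lm = result.multi_face_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm], dtype=np.float32)

if __name__ == "__main__":
    model = AUClassifier().eval()
    cap = cv2.VideoCapture("movie_clip.mp4")   # hypothetical input video
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as face_mesh:
        ok, frame = cap.read()
        while ok:
            pts = landmarks_from_frame(face_mesh, frame)
            if pts is not None:
                # Slice landmarks into the (hypothetical) FACS-style regions.
                regions = {name: pts[idx] for name, idx in REGION_GROUPS.items()}
                with torch.no_grad():
                    au_scores = model(torch.from_numpy(pts).T.unsqueeze(0))
                print({n: r.mean(axis=0).round(3).tolist() for n, r in regions.items()},
                      au_scores.squeeze().tolist())
            ok, frame = cap.read()
    cap.release()

In the system described in the summary, such AU activations would then be interpreted through the FACS table to yield primary and secondary emotion scores, which the GUI plotter displays in digitized form.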