Dep-FER: Facial Expression Recognition in Depressed Patients Based on Voluntary Facial Expression Mimicry

Bibliographic Details
Published in: IEEE Transactions on Affective Computing, Vol. 15, No. 3, pp. 1725-1738
Main Authors: Ye, Jiayu; Yu, Yanhong; Zheng, Yunshao; Liu, Yang; Wang, Qingxiang
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2024

Summary: Facial expressions are important nonverbal behaviors that humans use to convey their feelings. Clinical research has shown that depressed patients exhibit reduced facial expressiveness and mimicry. Motivated by this, we propose a voluntary facial expression mimicry (VFEM) experiment with seven expressions (anger, disgust, fear, happiness, neutrality, sadness, and surprise) to explore differences in facial expression features between depressed patients and healthy controls. The VFEM experiments reveal that depressed patients frequently exhibit negative facial expressions. We further propose a depression facial expression recognition (Dep-FER) model. Dep-FER comprises three key components: Mask Multi-head Self-Attention (MMSA), a facial action unit similarity loss function (AUs Loss), and a case-control loss function (CC Loss). MMSA filters out disturbing samples and forces the model to learn the relationships between different samples. AUs Loss uses the similarity between each expression's action units and the model output to improve the model's generalization ability. CC Loss exploits the intrinsic link between the depressed-patient and healthy-control categories. Dep-FER achieves excellent performance on VFEM and outperforms existing comparative models.
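To make the MMSA idea concrete, below is a minimal, hedged sketch of a mask-style multi-head self-attention applied across the samples of a mini-batch, where masked ("disturbing") samples are excluded from attention. All names here (MaskedBatchSelfAttention, d_model, n_heads, keep_mask) are illustrative assumptions, not the authors' published Dep-FER implementation.

```python
# Illustrative sketch only: attention over batch samples with a keep/drop mask.
import torch
import torch.nn as nn

class MaskedBatchSelfAttention(nn.Module):
    """Attends across the samples in a mini-batch; masked samples are ignored."""
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, feats: torch.Tensor, keep_mask: torch.Tensor) -> torch.Tensor:
        # feats:     (1, N, d_model) -- N sample embeddings treated as one sequence
        # keep_mask: (N,) bool       -- False marks a "disturbing" sample to filter out
        out, _ = self.attn(
            feats, feats, feats,
            key_padding_mask=~keep_mask.unsqueeze(0),  # True entries are ignored
        )
        return out

if __name__ == "__main__":
    x = torch.randn(1, 16, 512)      # 16 sample embeddings in one batch
    keep = torch.rand(16) > 0.2      # hypothetical mask keeping ~80% of samples
    y = MaskedBatchSelfAttention()(x, keep)
    print(y.shape)                   # torch.Size([1, 16, 512])
```

Under this reading, the mask lets each retained sample aggregate information from the other retained samples in the batch, which is one plausible way to "learn the relationship between different samples" as the summary describes.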
ISSN: 1949-3045
DOI: 10.1109/TAFFC.2024.3370103