Training Against Disguises: Addressing and Mitigating Bias in Facial Emotion Recognition with Synthetic Data

Bibliographic Details
Published in: IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1-6
Main Authors: Sukumar, Aadith; Desai, Aditya; Singhal, Peeyush; Gokhale, Sai; Jain, Deepak Kumar; Walambe, Rahee; Kotecha, Ketan
Format: Conference Proceeding
Language: English
Published: IEEE, 27.05.2024
Summary: Facial Emotion Recognition (FER) is a challenging problem owing to the variability of expressions and ambiguity in the data. Several popular benchmarking datasets used for FER tasks exhibit bias towards ethnicity, demography, and image capture mechanisms. More specifically, the images in such datasets are captured in controlled environments: good lighting, straight head orientation, and no occlusion or other facial artefacts. These biases may impair a model's generalizability, rendering it ineffective on novel, unseen data. This is particularly problematic in applications involving security (access control) and the identification of malicious intent from facial expressions: a criminal may disguise their face with make-up, headgear, or religious facial accessories and thereby fool FER models trained on such biased datasets. To that end, this work focuses on understanding these datasets by identifying such "good-image" bias, and demonstrates methods to mitigate it so that FER models perform better and become more robust. A simple yet effective FER framework for studying bias mitigation is proposed; using this framework, performance on popular datasets is analysed and a significant difference in model performance is observed. Additionally, a knowledge transfer technique and a synthetic image generation technique are proposed to mitigate the identified bias. Finally, the findings are validated on the FER task using the SFEW dataset, demonstrating the effectiveness of the proposed techniques in mitigating real-world "good-image" bias. Experiments show that the proposed techniques outperform baseline methods by an average fourfold improvement.
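
The record names a knowledge transfer technique but does not detail it. As a minimal sketch only, assuming "knowledge transfer" takes the common form of soft-label knowledge distillation from a teacher trained on a more diverse corpus into a student FER classifier: the ResNet-18 backbone, loss weighting, temperature, and the toy `loader` below are all illustrative assumptions, not the paper's method.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import models

    NUM_EMOTIONS = 7  # the seven basic emotion classes used by SFEW

    def make_fer_model(num_classes=NUM_EMOTIONS):
        # ResNet-18 backbone with a replaced classification head
        # (an assumed architecture; the record does not name one).
        model = models.resnet18(weights=None)
        model.fc = nn.Linear(model.fc.in_features, num_classes)
        return model

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=4.0, alpha=0.5):
        # Blend the teacher's softened predictions (soft targets)
        # with the usual hard-label cross-entropy loss.
        soft = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=1),
            F.softmax(teacher_logits / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

    # The teacher is assumed pre-trained on a diverse "in-the-wild" corpus;
    # `loader` stands in for a DataLoader over a biased FER dataset.
    teacher = make_fer_model().eval()
    student = make_fer_model()
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
    loader = [(torch.randn(8, 3, 224, 224),
               torch.randint(0, NUM_EMOTIONS, (8,)))]

    for images, labels in loader:
        with torch.no_grad():
            teacher_logits = teacher(images)
        loss = distillation_loss(student(images), teacher_logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()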
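
Likewise, the synthetic image generation technique is not described in this record. As a hedged stand-in rather than the authors' method, the sketch below synthesises "disguised" variants of clean training faces by occluding the upper (headgear) or lower (mask, scarf, accessories) face region; the class name, region splits, and probability are hypothetical.

    import random
    import torch

    class DisguiseOcclusion:
        # Hypothetical augmentation simulating disguises by pasting a
        # flat patch over the upper or lower face region with probability p.
        def __init__(self, p=0.5):
            self.p = p

        def __call__(self, img):
            # img: (3, H, W) float tensor holding one face crop.
            if random.random() > self.p:
                return img
            _, h, _ = img.shape
            if random.random() < 0.5:  # headgear
                top, bottom = 0, int(h * random.uniform(0.2, 0.4))
            else:                      # mask / scarf
                top, bottom = int(h * random.uniform(0.6, 0.8)), h
            img = img.clone()
            img[:, top:bottom, :] = torch.rand(3, 1, 1)  # flat coloured patch
            return img

    # Usage: apply the transform during training so the model also sees
    # occluded faces rather than only "good images".
    augment = DisguiseOcclusion(p=0.5)
    disguised = augment(torch.rand(3, 224, 224))
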
ISSN: 2770-8330
DOI: 10.1109/FG59268.2024.10582007