Are Face Detection Models Biased?

Bibliographic Details
Published in: 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG), pp. 1-7
Main Authors: Mittal, Surbhi; Thakral, Kartik; Majumdar, Puspita; Vatsa, Mayank; Singh, Richa
Format: Conference Proceeding
Language: English
Published: IEEE, 05.01.2023
Summary: The presence of bias in deep models leads to unfair outcomes for certain demographic subgroups. Research on bias focuses primarily on facial recognition and attribute prediction, with scarce emphasis on face detection. Existing studies consider face detection as binary classification into 'face' and 'non-face' classes. In this work, we investigate possible bias in the domain of face detection through facial region localization, which is currently unexplored. Since facial region localization is an essential task for all face recognition pipelines, it is imperative to analyze the presence of such bias in popular deep models. Most existing face detection datasets lack suitable annotations for such analysis. Therefore, we web-curate the Fair Face Localization with Attributes (F2LA) dataset and manually annotate more than 10 attributes per face, including facial localization information. Utilizing the extensive annotations from F2LA, an experimental setup is designed to study the performance of four pre-trained face detectors. We observe (i) a high disparity in detection accuracies across gender and skin tone, and (ii) an interplay of confounding factors beyond demography. The F2LA data and associated annotations can be accessed at http://iab-rubric.org/index.php/F2LA.
DOI: 10.1109/FG57933.2023.10042564
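
The abstract describes comparing detector output against ground-truth face localization annotations and reporting detection accuracy separately per demographic subgroup. The paper's own evaluation code is not reproduced in this record; the snippet below is only a minimal illustrative sketch of that kind of per-group measurement, assuming boxes given as (x1, y1, x2, y2), a hypothetical record structure with 'group', 'gt_box', and 'pred_boxes' fields, and an IoU >= 0.5 match criterion. The field names and threshold are assumptions for illustration, not details taken from the paper.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    if inter == 0.0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)


def detection_rate_by_group(records, iou_threshold=0.5):
    """Fraction of annotated faces recovered by the detector, split by subgroup.

    Each record is assumed (hypothetically) to hold:
      'group'      -- subgroup label (e.g. perceived gender or skin tone)
      'gt_box'     -- ground-truth face box from the annotations
      'pred_boxes' -- boxes returned by the detector for the same image
    """
    hits, totals = {}, {}
    for rec in records:
        g = rec["group"]
        totals[g] = totals.get(g, 0) + 1
        # A face counts as detected if any predicted box overlaps enough.
        detected = any(iou(rec["gt_box"], p) >= iou_threshold
                       for p in rec["pred_boxes"])
        hits[g] = hits.get(g, 0) + int(detected)
    return {g: hits[g] / totals[g] for g in totals}


# Toy usage with made-up boxes: a gap between the per-group rates would
# indicate the kind of disparity the paper reports across gender and skin tone.
rates = detection_rate_by_group([
    {"group": "dark-skin", "gt_box": (10, 10, 50, 50), "pred_boxes": [(12, 9, 48, 52)]},
    {"group": "light-skin", "gt_box": (5, 5, 40, 40), "pred_boxes": [(6, 5, 41, 42)]},
])
print(rates)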