Are Face Detection Models Biased?
Main Authors | , , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 07.11.2022 |
Subjects | |
Online Access | Get full text |
Summary: | The presence of bias in deep models leads to unfair outcomes for certain demographic subgroups. Research in bias focuses primarily on facial recognition and attribute prediction with scarce emphasis on face detection. Existing studies consider face detection as binary classification into 'face' and 'non-face' classes. In this work, we investigate possible bias in the domain of face detection through facial region localization which is currently unexplored. Since facial region localization is an essential task for all face recognition pipelines, it is imperative to analyze the presence of such bias in popular deep models. Most existing face detection datasets lack suitable annotation for such analysis. Therefore, we web-curate the Fair Face Localization with Attributes (F2LA) dataset and manually annotate more than 10 attributes per face, including facial localization information. Utilizing the extensive annotations from F2LA, an experimental setup is designed to study the performance of four pre-trained face detectors. We observe (i) a high disparity in detection accuracies across gender and skin-tone, and (ii) interplay of confounding factors beyond demography. The F2LA data and associated annotations can be accessed at http://iab-rubric.org/index.php/F2LA. |
---|---|
DOI: | 10.48550/arxiv.2211.03588 |
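
As a rough illustration of the kind of per-subgroup evaluation the abstract describes, the sketch below computes face detection rates (predicted boxes IoU-matched against annotated boxes) separately for gender and skin-tone groups. The annotation fields, the 0.5 IoU threshold, and all function names are illustrative assumptions, not the actual F2LA schema or the paper's evaluation protocol.

```python
from collections import defaultdict

IOU_THRESHOLD = 0.5  # assumed matching threshold; the paper may use a different criterion


def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def subgroup_detection_rates(annotations, predictions):
    """Fraction of annotated faces detected (IoU >= threshold), per attribute value.

    annotations: list of dicts with hypothetical fields
        {"image_id": str, "box": (x1, y1, x2, y2), "gender": str, "skin_tone": str}
    predictions: dict mapping image_id -> list of predicted boxes from a detector.
    """
    detected = defaultdict(int)
    total = defaultdict(int)
    for face in annotations:
        preds = predictions.get(face["image_id"], [])
        hit = any(iou(face["box"], p) >= IOU_THRESHOLD for p in preds)
        for attr in ("gender", "skin_tone"):
            key = (attr, face[attr])
            total[key] += 1
            detected[key] += int(hit)
    return {key: detected[key] / total[key] for key in total}


if __name__ == "__main__":
    # Toy example with made-up boxes and attribute labels.
    annotations = [
        {"image_id": "img1", "box": (10, 10, 60, 60), "gender": "female", "skin_tone": "dark"},
        {"image_id": "img2", "box": (20, 20, 80, 80), "gender": "male", "skin_tone": "light"},
    ]
    predictions = {
        "img1": [(12, 11, 58, 62)],      # close overlap -> counted as detected
        "img2": [(200, 200, 240, 240)],  # no overlap -> counted as missed
    }
    for (attr, value), rate in sorted(subgroup_detection_rates(annotations, predictions).items()):
        print(f"{attr}={value}: detection rate {rate:.2f}")
```

Disparity of the kind reported in the abstract would then be read off as the gap between detection rates of different subgroups (e.g., female vs. male, or across skin-tone categories).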