Towards a deep learning framework for unconstrained face detection

Bibliographic Details
Published in: 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1 - 8
Main Authors: Yutong Zheng, Chenchen Zhu, Khoa Luu, Chandrasekhar Bhagavatula, T. Hoang Ngan Le, Marios Savvides
Format: Conference Proceeding
Language: English
Published: IEEE, 01.09.2016
Summary: Robust face detection is one of the most important preprocessing steps in support of facial expression analysis, facial landmarking, face recognition, pose estimation, the building of 3D facial models, etc. Although the topic has been intensely studied for decades, it remains challenging due to the numerous variations of face images in real-world scenarios. In this paper, we present a novel approach named Multiple Scale Faster Region-based Convolutional Neural Network (MS-FRCNN) to robustly detect human facial regions in images collected under various challenging conditions, e.g. large occlusions, extremely low resolutions, facial expressions, and strong illumination variations. The proposed approach is benchmarked on two challenging face detection databases, the WIDER FACE database and the Face Detection Dataset and Benchmark (FDDB), and compared against other recent face detection methods, e.g. Two-stage CNN, Multi-scale Cascade CNN, Faceness, Aggregate Channel Features, HeadHunter, Multi-view Face Detection, and Cascade CNN. The experimental results show that our proposed approach consistently achieves results highly competitive with these state-of-the-art face detection methods.
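
Note: the abstract names the MS-FRCNN approach but does not describe its internals. The following is a minimal, hypothetical sketch only, assuming that "Multiple Scale" refers to ROI-pooling each region proposal from several convolutional stages and concatenating the pooled features before the detection head, in the spirit of Faster R-CNN. All names and parameters below (e.g. MultiScaleRoIHead, the channel counts) are illustrative assumptions, not the authors' code.

    # Hypothetical sketch of a multi-scale ROI head; not the authors' implementation.
    import torch
    import torch.nn as nn
    from torchvision.ops import roi_pool


    class MultiScaleRoIHead(nn.Module):
        """Pools each region proposal from several feature maps and fuses
        them before classification (assumed multi-scale Faster R-CNN head)."""

        def __init__(self, channels=(128, 256, 512), pooled_size=7, num_classes=2):
            super().__init__()
            self.pooled_size = pooled_size
            fused_dim = sum(channels) * pooled_size * pooled_size
            self.classifier = nn.Sequential(
                nn.Linear(fused_dim, 1024),
                nn.ReLU(inplace=True),
                nn.Linear(1024, num_classes),  # e.g. face vs. background
            )

        def forward(self, feature_maps, spatial_scales, proposals):
            # feature_maps:   list of (N, C_i, H_i, W_i) tensors from different stages
            # spatial_scales: each map's resolution relative to the input image
            # proposals:      (K, 5) tensor of [batch_index, x1, y1, x2, y2] boxes
            pooled = [
                roi_pool(fm, proposals, self.pooled_size, scale)
                for fm, scale in zip(feature_maps, spatial_scales)
            ]
            fused = torch.cat(pooled, dim=1).flatten(start_dim=1)
            return self.classifier(fused)


    # Toy usage: two feature maps at strides 8 and 16, one proposal box.
    head = MultiScaleRoIHead(channels=(128, 256))
    fmaps = [torch.randn(1, 128, 64, 64), torch.randn(1, 256, 32, 32)]
    boxes = torch.tensor([[0.0, 32.0, 32.0, 200.0, 200.0]])
    scores = head(fmaps, [1 / 8, 1 / 16], boxes)  # shape: (1, 2)

Concatenating shallow, high-resolution features with deeper, semantically richer ones is one plausible way to keep very small or heavily occluded faces detectable, which matches the challenging conditions the abstract targets.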
DOI: 10.1109/BTAS.2016.7791203