Face-Specific Data Augmentation for Unconstrained Face Recognition
| Published in | International Journal of Computer Vision, Vol. 127, No. 6–7, pp. 642–667 |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | New York: Springer US, 01.06.2019 (Springer; Springer Nature B.V.) |
Summary: We identify two issues as key to developing effective face recognition systems: maximizing the appearance variations of training images and minimizing appearance variations in test images. The former is required to train the system for whatever appearance variations it will ultimately encounter and is often addressed by collecting massive training sets with millions of face images. The latter involves various forms of appearance normalization for removing distracting nuisance factors at test time and making test faces easier to compare. We describe novel, efficient face-specific data augmentation techniques and show them to be ideally suited for both purposes. By using knowledge of faces, their 3D shapes, and appearances, we show the following: (a) We can artificially enrich training data for face recognition with face-specific appearance variations. (b) This synthetic training data can be efficiently produced online, thereby reducing the massive storage requirements of large-scale training sets and simplifying training for many appearance variations. Finally, (c) the same, fast data augmentation techniques can be applied at test time to reduce appearance variations and improve face representations. Together with additional technical novelties, we describe a highly effective face recognition pipeline which, at the time of submission, obtains state-of-the-art results across multiple benchmarks. Portions of this paper were previously published by Masi et al. (European Conference on Computer Vision, Springer, pp 579–596, 2016b; International Conference on Automatic Face and Gesture Recognition, 2017).
ISSN: 0920-5691; 1573-1405
DOI: 10.1007/s11263-019-01178-0
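As an illustrative aside, and not the authors' actual rendering pipeline, the sketch below shows one crude way to realize the kind of online, face-specific pose augmentation described in the summary: approximating an out-of-plane yaw change with a homography warp, using a planar stand-in where the paper fits and renders generic 3D face shapes. The file name, focal-length factor, and yaw angles are hypothetical placeholders.

```python
# Minimal sketch of online, face-specific augmentation (illustration only).
# New "poses" are approximated by warping with H = K R K^-1, i.e. treating the
# face as a distant, roughly planar surface under a pure rotation. The paper
# instead renders from generic 3D face shapes; this planar warp is a crude
# stand-in. File name, focal factor, and angles are hypothetical.
import numpy as np
import cv2


def yaw_homography(width, height, yaw_deg, focal_factor=1.2):
    """Homography induced by a pure rotation of `yaw_deg` about the vertical axis."""
    f = focal_factor * max(width, height)            # rough focal length in pixels
    K = np.array([[f, 0.0, width / 2.0],
                  [0.0, f, height / 2.0],
                  [0.0, 0.0, 1.0]])
    a = np.deg2rad(yaw_deg)
    R = np.array([[np.cos(a), 0.0, np.sin(a)],
                  [0.0,       1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
    return K @ R @ np.linalg.inv(K)


def augment_face(image, yaw_angles=(-30, -15, 15, 30)):
    """Yield the original image, its mirror, and planar 'pose' warps, one at a time."""
    h, w = image.shape[:2]
    yield image
    yield cv2.flip(image, 1)                         # horizontal mirror
    for yaw in yaw_angles:
        H = yaw_homography(w, h, yaw)
        yield cv2.warpPerspective(image, H, (w, h),
                                  flags=cv2.INTER_LINEAR,
                                  borderMode=cv2.BORDER_REPLICATE)


if __name__ == "__main__":
    img = cv2.imread("face.jpg")                     # hypothetical aligned face crop
    if img is not None:
        for i, aug in enumerate(augment_face(img)):
            cv2.imwrite(f"aug_{i}.jpg", aug)
```

Because the warped views are generated on the fly inside the data loader, they need never be written to disk, which is the storage argument made in the summary; the actual method drives the synthesis from fitted 3D face geometry rather than this planar approximation.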