Coupled Attribute Learning for Heterogeneous Face Recognition

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 31, No. 11, pp. 4699-4712
Main Authors: Liu, Decheng; Gao, Xinbo; Wang, Nannan; Li, Jie; Peng, Chunlei
Format: Journal Article
Language: English
Published: United States, The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2020

Summary: Heterogeneous face recognition (HFR) is a challenging problem in face recognition, subject to large textural and spatial structure differences between face images. Unlike conventional face recognition in homogeneous environments, many face images in reality are captured from different sources (including different sensors or different mechanisms). In addition, the limited number of cross-modality training pairs makes HFR more challenging because of the complex generation procedure of these images. Despite the great progress achieved in recent years, existing works mainly treat HFR as cross-modality image matching alone. However, in real-world situations it is more practical to obtain both facial images and semantic descriptions of facial attributes, since such semantic description clues are nearly always produced during the process of image generation. Motivated by human cognitive mechanisms, we utilize the explicit invariant semantic description, i.e., face attributes, to help bridge the gap among face images of different modalities. Existing facial attribute-related face recognition methods primarily regard attributes as high-level features used to enhance recognition performance, ignoring the inherent relationship between face attributes and identities. In this article, we propose a novel coupled attribute learning for HFR (CAL-HFR) method that requires no manual attribute labeling. Deep convolutional networks are employed to directly map face images in heterogeneous scenarios to a compact common space, where distances are taken as dissimilarities of pairs. A coupled attribute guided triplet loss (CAGTL) is designed to train an end-to-end HFR network that can effectively eliminate the defects of incorrectly estimated attributes. Extensive experiments on multiple heterogeneous scenarios demonstrate that the proposed method achieves superior performance compared with state-of-the-art methods. Furthermore, we make our generated pairwise annotated heterogeneous facial attribute database publicly available for evaluation and for promoting related research.
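The abstract describes mapping heterogeneous face images into a compact common embedding space and training it end to end with a coupled attribute guided triplet loss (CAGTL). The sketch below only illustrates that general idea under stated assumptions; it is not the paper's CAGTL formulation. The function name, the attribute-similarity weighting scheme, the margin value, and the auxiliary attribute predictor are all hypothetical choices made for illustration.

```python
# Illustrative sketch only: a triplet loss in a shared embedding space, with
# each triplet weighted by the estimated attribute similarity of its anchor
# and negative. This is an assumption-driven stand-in, not the paper's CAGTL.
import torch
import torch.nn.functional as F

def attribute_guided_triplet_loss(anchor, positive, negative,
                                  attr_anchor, attr_negative,
                                  margin=0.3):
    """Triplet loss on (B, D) L2-normalized embeddings.

    anchor and positive/negative are assumed to come from different modalities
    (e.g., sketch vs. photo) but are mapped by modality-specific CNNs into one
    common space. attr_anchor / attr_negative are (B, A) attribute probability
    vectors from a hypothetical auxiliary attribute predictor.
    """
    d_ap = (anchor - positive).pow(2).sum(dim=1)   # squared distance to positive
    d_an = (anchor - negative).pow(2).sum(dim=1)   # squared distance to negative
    base = F.relu(d_ap - d_an + margin)            # standard triplet hinge

    # Assumed guidance term: emphasize negatives whose estimated attributes are
    # similar to the anchor's (attribute-confusable, hence hard) and down-weight
    # attribute-dissimilar (easy) negatives.
    attr_sim = F.cosine_similarity(attr_anchor, attr_negative, dim=1)
    weight = torch.clamp(attr_sim, min=0.0)        # ignore anti-correlated attributes
    return (weight * base).mean()

if __name__ == "__main__":
    # Toy usage with random embeddings and random attribute vectors.
    B, D, A = 8, 128, 40
    emb = lambda: F.normalize(torch.randn(B, D), dim=1)
    loss = attribute_guided_triplet_loss(emb(), emb(), emb(),
                                         torch.rand(B, A), torch.rand(B, A))
    print(loss.item())
```

In this sketch the attribute term only reweights a standard triplet hinge, which is one simple way a loss could be "guided" by attributes while remaining robust to noisy attribute estimates; the actual mechanism used in CAL-HFR may differ.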
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2019.2957285