Learning Domain-Invariant Discriminative Features for Heterogeneous Face Recognition
Published in: IEEE Access, Vol. 8, pp. 209790–209801
Main Authors:
Format: Journal Article
Language: English
Published: Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2020
Summary: Heterogeneous face recognition (HFR), the task of matching face images across different domains, is challenging because of the large cross-domain discrepancy and the scarcity of pairwise cross-domain training data. This article proposes a quadruplet framework for learning domain-invariant discriminative features (DIDF) for HFR, which integrates domain-level and class-level alignment in one unified network. The domain-level alignment reduces the cross-domain distribution discrepancy. The class-level alignment, based on a special quadruplet loss, further diminishes intra-class variations and enlarges inter-class separability among instances, thereby handling the misalignment and adversarial-equilibrium problems faced by domain-level alignment alone. With a bidirectional cross-domain data selection strategy, the quadruplet loss-based method substantially enriches the training set and further reduces the cross-modality shift. Through the joint supervision and mutual reinforcement of these two components, the identity features are both domain invariant and class discriminative. Extensive experiments on the challenging CASIA NIR-VIS 2.0 database, the Oulu-CASIA NIR&VIS database, the BUAA-VisNir database, and the IIIT-D viewed sketch database demonstrate the effectiveness and strong generalization capability of the proposed method.
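The summary describes a class-level alignment built on a quadruplet loss: same-identity pairs from the two domains are pulled together while two distance constraints push negatives apart. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of a generic quadruplet loss of the kind the abstract refers to (function name, margins, and the toy embeddings are illustrative assumptions, not the authors' code):

```python
import numpy as np

def quadruplet_loss(anchor, positive, neg1, neg2, margin1=1.0, margin2=0.5):
    """Generic quadruplet loss sketch: the anchor-positive distance must be
    smaller than the anchor-negative distance by margin1, and also smaller
    than the distance between two unrelated negatives by margin2."""
    d_ap = np.sum((anchor - positive) ** 2)  # same identity, e.g. NIR vs. VIS
    d_an = np.sum((anchor - neg1) ** 2)      # anchor vs. a different identity
    d_nn = np.sum((neg1 - neg2) ** 2)        # two other, mutually different identities
    return max(0.0, d_ap - d_an + margin1) + max(0.0, d_ap - d_nn + margin2)

# Toy 2-D embeddings: the positive lies close to the anchor while both
# negatives are far away, so both hinge terms are inactive.
a  = np.array([0.0, 0.0])
p  = np.array([0.1, 0.0])
n1 = np.array([3.0, 0.0])
n2 = np.array([0.0, 3.0])
print(quadruplet_loss(a, p, n1, n2))  # -> 0.0
```

In a full system, the anchor and positive would be cross-domain images of the same person (realizing the bidirectional cross-domain selection the summary mentions), so minimizing this loss simultaneously shrinks intra-class, cross-modality distances and enforces inter-class margins.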
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3038906