Information Maximization for Extreme Pose Face Recognition
Main Authors | , , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 07.09.2022 |
Summary: | In this paper, we seek to draw connections between the frontal and profile
face images in an abstract embedding space. We exploit this connection using a
coupled-encoder network to project frontal/profile face images into a common
latent embedding space. The proposed model forces the similarity of
representations in the embedding space by maximizing the mutual information
between two views of the face. The proposed coupled-encoder benefits from three
contributions for matching faces with extreme pose disparities. First, we
leverage our pose-aware contrastive learning to maximize the mutual information
between frontal and profile representations of identities. Second, a memory
buffer, which consists of latent representations accumulated over past
iterations, is integrated into the model so it can draw on far more
instances than a single mini-batch provides. Third, a novel pose-aware adversarial
domain adaptation method forces the model to learn an asymmetric mapping from
profile to frontal representation. In our framework, the coupled-encoder learns
to enlarge the margin between the distributions of genuine and impostor faces,
which results in high mutual information between different views of the same
identity. The effectiveness of the proposed model is investigated through
extensive experiments, evaluations, and ablation studies on four benchmark
datasets, as well as comparisons with compelling state-of-the-art algorithms. |
---|---|
DOI: | 10.48550/arxiv.2209.03456 |
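The abstract combines two well-known ingredients: an InfoNCE-style contrastive loss that maximizes mutual information between paired views, and a memory buffer of past embeddings used as extra negatives. The paper's exact loss is not reproduced in this record; the sketch below is only a minimal numpy illustration of that general recipe, with all function and variable names (`info_nce_with_buffer`, `frontal`, `profile`, `buffer`) chosen here for exposition, not taken from the paper.

```python
import numpy as np

def info_nce_with_buffer(frontal, profile, buffer, temperature=0.1):
    """InfoNCE-style contrastive loss: the i-th frontal embedding should
    match its paired profile embedding against negatives drawn from both
    the current batch and a memory buffer of past profile embeddings.

    frontal: (N, d) array, profile: (N, d) array, buffer: (M, d) array.
    """
    def l2_normalize(x):
        # Normalize rows so dot products become cosine similarities.
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    f = l2_normalize(frontal)
    p = l2_normalize(profile)
    b = l2_normalize(buffer)

    # Candidate set = in-batch profiles followed by buffered negatives.
    candidates = np.concatenate([p, b], axis=0)        # (N + M, d)
    logits = f @ candidates.T / temperature            # (N, N + M)

    # Numerically stable log-softmax over each row of similarities.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # The positive for row i sits in column i (the paired profile).
    n = f.shape[0]
    return -log_probs[np.arange(n), np.arange(n)].mean()
```

With correctly paired views the positive similarity dominates and the loss is small; shuffling the pairing raises it, which is the sense in which minimizing this loss maximizes a lower bound on the mutual information between the two views.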