Boosting Unconstrained Face Recognition with Targeted Style Adversary

Bibliographic Details
Main Authors: Saadabadi, Mohammad Saeed Ebrahimi; Malakshan, Sahar Rahimi; Hosseini, Seyed Rasoul; Nasrabadi, Nasser M.
Format: Journal Article
Language: English
Published: 14.08.2024
More Information
Summary: While deep face recognition models have demonstrated remarkable performance, they often struggle on inputs from domains beyond their training data. Recent attempts expand the training set by relying on computationally expensive and inherently challenging image-space augmentation via image generation modules. In an orthogonal direction, we present a simple yet effective method that expands the training data by interpolating between instance-level feature statistics across labeled and unlabeled sets. Our method, dubbed Targeted Style Adversary (TSA), is motivated by two observations: (i) the input domain is reflected in feature statistics, and (ii) face recognition model performance is influenced by style information. Shifting towards an unlabeled style implicitly synthesizes challenging training instances. We devise a recognizability metric to constrain our framework to preserve the inherent identity-related information of labeled instances. The efficacy of our method is demonstrated through evaluations on unconstrained benchmarks, outperforming or being on par with its competitors while offering nearly a 70% improvement in training speed and 40% less memory consumption.
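
Illustrative sketch: the summary describes interpolating instance-level feature statistics between labeled and unlabeled sets. The minimal PyTorch sketch below mixes channel-wise mean and standard deviation of labeled feature maps toward unlabeled ones; the function name, the mixing weight lam, and the overall framing are illustrative assumptions, not the authors' exact TSA procedure (which additionally constrains the shift with a recognizability metric).

    # Minimal sketch of instance-level feature-statistic interpolation,
    # in the spirit of the TSA idea summarized above. Names and the
    # fixed mixing weight `lam` are assumptions for illustration only.
    import torch

    def mix_feature_statistics(labeled_feat: torch.Tensor,
                               unlabeled_feat: torch.Tensor,
                               lam: float = 0.5,
                               eps: float = 1e-6) -> torch.Tensor:
        """Shift the style (channel-wise mean/std) of labeled features
        toward the statistics of unlabeled features, keeping the
        normalized content. Both inputs have shape (N, C, H, W)."""
        # Instance-level statistics per sample and channel.
        mu_l = labeled_feat.mean(dim=(2, 3), keepdim=True)
        sig_l = labeled_feat.std(dim=(2, 3), keepdim=True) + eps
        mu_u = unlabeled_feat.mean(dim=(2, 3), keepdim=True)
        sig_u = unlabeled_feat.std(dim=(2, 3), keepdim=True) + eps

        # Interpolate statistics between labeled and unlabeled styles.
        mu_mix = lam * mu_l + (1.0 - lam) * mu_u
        sig_mix = lam * sig_l + (1.0 - lam) * sig_u

        # Re-normalize labeled content and apply the mixed style.
        normalized = (labeled_feat - mu_l) / sig_l
        return normalized * sig_mix + mu_mix

    if __name__ == "__main__":
        x_labeled = torch.randn(4, 64, 14, 14)
        x_unlabeled = torch.randn(4, 64, 14, 14)
        mixed = mix_feature_statistics(x_labeled, x_unlabeled, lam=0.3)
        print(mixed.shape)  # torch.Size([4, 64, 14, 14])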
DOI: 10.48550/arxiv.2408.07642