Batch Normalization Embeddings for Deep Domain Generalization
Main Authors: , ,
Format: Journal Article
Language: English
Published: 25.11.2020
Summary: Domain generalization aims at training machine learning models to perform robustly across different and unseen domains. Several recent methods use multiple datasets to train models to extract domain-invariant features, hoping to generalize to unseen domains. Instead, we first explicitly train domain-dependent representations by using ad-hoc batch normalization layers to collect the statistics of each domain independently. Then, we propose to use these statistics to map domains into a shared latent space, where membership in a domain can be measured by means of a distance function. At test time, we project samples from an unknown domain into the same space and infer properties of their domain as a linear combination of the known ones. We apply the same mapping strategy at training and test time, learning both a latent representation and a powerful but lightweight ensemble model. We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks: PACS, Office-31 and Office-Caltech.
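To make the mechanism described in the abstract concrete, the sketch below keeps a separate set of batch normalization statistics for each source domain and scores a batch from an unseen domain against each of them. It is a minimal sketch assuming PyTorch; the names (`MultiDomainBatchNorm2d`, `domain_weights`) and the Euclidean distance over (mean, variance) statistics are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiDomainBatchNorm2d(nn.Module):
    """One BatchNorm2d per training domain, so each layer accumulates
    the running statistics of a single source domain."""

    def __init__(self, num_features: int, num_domains: int):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_domains)
        )

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        # Route the batch through the normalizer of its source domain.
        return self.bns[domain](x)


def domain_weights(x: torch.Tensor, mdbn: MultiDomainBatchNorm2d) -> torch.Tensor:
    """Project a batch from an unknown domain into the space of BN
    statistics and weight each known domain by its proximity."""
    # Per-channel instance statistics of the incoming activations.
    mu = x.mean(dim=(0, 2, 3))
    var = x.var(dim=(0, 2, 3), unbiased=False)
    dists = torch.stack([
        # Euclidean distance in (mean, variance) space; the exact
        # metric used in the paper is an assumption here.
        torch.norm(mu - bn.running_mean) + torch.norm(var - bn.running_var)
        for bn in mdbn.bns
    ])
    # Closer domains receive larger weights.
    return F.softmax(-dists, dim=0)
```

A test-time ensemble would then combine the per-domain predictions with these weights, e.g. `logits = sum(w * model(x, domain=d) for d, w in enumerate(weights))`, so that known domains whose statistics lie closer to the unseen batch contribute more to the final prediction.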
DOI: 10.48550/arxiv.2011.12672