Pivot-Guided Embedding for Domain Generalization

Bibliographic Details
Published in: IEEE Access, Vol. 10, pp. 126858–126870
Main Authors: Seong, Hyun Seok; Choi, Jaehyun; Jeong, Woojin; Heo, Jae-Pil
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022

Summary: Neural networks suffer from the distribution gap between training and test data known as domain shift. Domain generalization (DG) methods aim to learn domain-invariant representations from limited source-domain data alone in order to cope with unseen target domains. The main assumption is that a model trained to extract semantically consistent features, free of any domain-specific information, is highly adaptable to an unseen target domain. Metric learning allows embedded representations to be class-separated and domain-mixed, which is an optimal condition for DG but has been downplayed in recent works. Even the most popular triplet embedding has limitations in forming an optimal embedding space for DG due to its instability. In this paper, we present a novel deep metric learning method for domain-invariant representations. Specifically, we propose Pivot-Guided Embedding (PGE), which explicitly shapes the entire feature distribution of the embedding space with a novel pivot-guided attraction-repulsion mechanism, addressing the instability problem of triplet embedding. In particular, we leverage pivot features, which represent a coarse distribution of the entire space, as reference points to guide other features toward a domain-invariant feature distribution. To this end, a pivot selection algorithm is presented to reliably reflect the entire feature distribution. Furthermore, we define the Guide-Field, a subspace spanned by a subset of pivots chosen for each individual sample, to guide that sample toward the domain-invariant feature space. In a nutshell, the attraction-repulsion mechanism based on pivots, a reliable set of features representing the entire feature distribution, enables the model to extract domain-invariant feature representations and also resolves the instability problem of the triplet loss. Experimental results on three different benchmarks validate the performance advantages of the proposed method over state-of-the-art DG techniques.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3225970
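
The abstract describes PGE only at a high level; the actual pivot selection algorithm and the Guide-Field construction are detailed in the paper itself. Purely as an illustration of a pivot-guided attraction-repulsion loss, the rough PyTorch sketch below assumes class-centroid pivots and a hinge-style repulsion margin; the function names (select_pivots, pivot_guided_loss) and the margin value are hypothetical and are not taken from the paper.

import torch
import torch.nn.functional as F

def select_pivots(features, labels):
    # Assumed pivot choice: per-class mean embeddings as coarse reference
    # points for the overall feature distribution (the paper instead uses
    # a dedicated pivot selection algorithm).
    classes = labels.unique()
    pivots = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    return pivots, classes

def pivot_guided_loss(features, labels, margin=0.5):
    # Attraction-repulsion around pivots: pull each L2-normalized sample
    # toward its own-class pivot, push it away from other-class pivots.
    features = F.normalize(features, dim=1)
    pivots, classes = select_pivots(features, labels)
    pivots = F.normalize(pivots, dim=1)
    dists = torch.cdist(features, pivots)                # (N, C) sample-to-pivot distances
    same = labels.unsqueeze(1) == classes.unsqueeze(0)   # (N, C) own-class mask
    attract = (dists * same).sum(dim=1)                  # distance to own-class pivot
    repel = (F.relu(margin - dists) * ~same).sum(dim=1)  # hinge on other-class pivots
    return (attract + repel).mean()

# Toy usage: random embeddings with labels drawn from three classes.
feats = torch.randn(8, 128, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
pivot_guided_loss(feats, labels).backward()

Note that in the paper the pivots also serve to mix source domains (e.g., attracting samples toward same-class pivots from other domains), which this class-only sketch does not model.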