Consistency Regularization for Domain Generalization with Logit Attribution Matching

Bibliographic Details
Main Authors: Gao, Han; Li, Kaican; Xie, Weiyan; Lin, Zhi; Huang, Yongxiang; Wang, Luning; Cao, Caleb Chen; Zhang, Nevin L.
Format: Journal Article
Language: English
Published: 13.05.2023
Online Access: Get full text

Summary: Domain generalization (DG) is about training models that generalize well under domain shift. Previous research on DG has been conducted mostly in single-source or multi-source settings. In this paper, we consider a third, lesser-known setting where a training domain is endowed with a collection of pairs of examples that share the same semantic information. Such semantic sharing (SS) pairs can be created via data augmentation and then utilized for consistency regularization (CR). We present a theory showing CR is conducive to DG and propose a novel CR method called Logit Attribution Matching (LAM). We conduct experiments on five DG benchmarks and four pretrained models with SS pairs created by both generic and targeted data augmentation methods. LAM outperforms representative single/multi-source DG methods and various CR methods that leverage SS pairs. The code and data of this project are available at https://github.com/Gaohan123/LAM
DOI: 10.48550/arxiv.2305.07888
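
The summary describes creating semantic-sharing (SS) pairs via data augmentation and applying consistency regularization (CR) over them. Below is a minimal PyTorch sketch of that general idea only; the toy model, the noise-based stand-in for augmentation, and the squared-error logit-consistency term are illustrative assumptions, not the paper's Logit Attribution Matching (LAM) loss (see the linked repository for the authors' implementation).

```python
# Minimal sketch of consistency regularization (CR) over semantic-sharing (SS)
# pairs. Everything here (model, augmentation, loss weighting) is illustrative;
# it is NOT the paper's LAM method.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy classifier standing in for a pretrained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 7))

def cr_loss(x, x_ss, y, lam=1.0):
    """Cross-entropy on x plus a consistency penalty that pulls the logits of
    an example and its SS counterpart together."""
    logits = model(x)
    logits_ss = model(x_ss)
    task_loss = F.cross_entropy(logits, y)
    consistency = F.mse_loss(logits_ss, logits.detach())  # match logits of the pair
    return task_loss + lam * consistency

# Toy usage: random tensors stand in for an image batch and its augmented
# (semantic-sharing) counterpart.
x = torch.randn(8, 3, 32, 32)
x_ss = x + 0.05 * torch.randn_like(x)   # stand-in for a generic/targeted augmentation
y = torch.randint(0, 7, (8,))
loss = cr_loss(x, x_ss, y)
loss.backward()
```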