Adversarial style discrepancy minimization for unsupervised domain adaptation


Bibliographic Details
Published in: Neural Networks, Vol. 157, pp. 216-225
Main Authors: Luo, Xin; Chen, Wei; Liang, Zhengfa; Li, Chen; Tan, Yusong
Format: Journal Article
Language: English
Published: Elsevier Ltd, United States, 01.01.2023

Summary: Mainstream unsupervised domain adaptation (UDA) methods align feature distributions across different domains via adversarial learning. However, most of them focus on global distribution alignment and ignore fine-grained domain discrepancy. Moreover, they generally require auxiliary models, incurring extra computation cost. To tackle these issues, this study proposes a UDA method that differentiates individual samples without the help of extra models. To this end, we introduce a novel discrepancy metric, termed style discrepancy, to distinguish different target samples, and we propose a paradigm for adversarial style discrepancy minimization (ASDM). Specifically, we fix the parameters of the feature extractor and maximize the style discrepancy to update the classifier, which helps detect more hard samples. Conversely, we fix the parameters of the classifier and minimize the style discrepancy to update the feature extractor, pushing those hard samples toward the support of the source distribution. This adversarial interplay progressively detects and adapts more hard samples, leading to fine-grained domain adaptation. Experiments on different UDA tasks validate the effectiveness of ASDM. Overall, without any extra models, ASDM reaches 46.9% mIoU on the GTA5-to-Cityscapes benchmark and 84.7% accuracy on the VisDA-2017 benchmark, outperforming many existing adversarial-learning-based methods.
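The minimization half of the alternating scheme described in the summary can be sketched in plain Python. The concrete choices below are illustrative assumptions, not the paper's exact formulation: "style" is taken as the (mean, std) statistics of a feature set, the discrepancy is an L1 distance between source and target statistics, and the update is a simple affine re-mapping of target features toward the source statistics, standing in for the gradient step on the feature extractor with the classifier fixed.

```python
import math


def style_stats(feats):
    # First- and second-order statistics (mean, std) of a feature set,
    # used here as a stand-in for "style" (an assumption of this sketch).
    m = sum(feats) / len(feats)
    v = sum((x - m) ** 2 for x in feats) / len(feats)
    return m, math.sqrt(v)


def style_discrepancy(src_feats, tgt_feats):
    # Hypothetical style-discrepancy metric: L1 distance between the
    # (mean, std) statistics of the source and target feature sets.
    ms, ss = style_stats(src_feats)
    mt, st = style_stats(tgt_feats)
    return abs(ms - mt) + abs(ss - st)


def minimize_step(src_feats, tgt_feats, lr=0.5):
    # One simplified minimization step: affinely re-map the target
    # features so their mean/std move a fraction `lr` toward the
    # source statistics. Assumes the target std is nonzero.
    ms, ss = style_stats(src_feats)
    mt, st = style_stats(tgt_feats)
    new_mean = mt + lr * (ms - mt)
    new_std = st + lr * (ss - st)
    return [new_mean + (x - mt) * (new_std / st) for x in tgt_feats]


src = [0.0, 2.0, 4.0]
tgt = [10.0, 11.0, 12.0]
before = style_discrepancy(src, tgt)
adapted = minimize_step(src, tgt)
after = style_discrepancy(src, adapted)
print(before, after)  # the discrepancy shrinks after one step
```

In the full method this step alternates with its adversary: the classifier is updated to *maximize* the same discrepancy (exposing hard target samples), then the feature extractor is updated to minimize it, as sketched above.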
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2022.10.015