SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation
Format | Journal Article |
Language | English |
Published | 16.03.2024 |
Summary | In the face of deep learning models' vulnerability to domain shift,
source-free domain adaptation (SFDA) methods have been proposed to adapt models
to new, unseen target domains without requiring access to source-domain data.
Although the potential benefits of applying data augmentation to SFDA are
attractive, several challenges arise, such as the dependence on prior knowledge
of class-preserving transformations and the increase in memory and
computational requirements. In this paper, we propose Source-free Domain
Adaptation Through the Lens of Data Augmentation (SF(DA)$^2$), a novel approach
that leverages the benefits of data augmentation without suffering from these
challenges. We construct an augmentation graph in the feature space of the
pretrained model using the neighbor relationships between target features, and
we propose spectral neighborhood clustering to identify partitions in the
prediction space. Furthermore, we propose implicit feature augmentation and
feature disentanglement as regularization loss functions that effectively
utilize class-semantic information within the feature space. These regularizers
simulate the inclusion of an unlimited number of augmented target features in
the augmentation graph while minimizing computational and memory demands. Our
method shows superior adaptation performance in SFDA scenarios, including 2D
image and 3D point cloud datasets and a highly imbalanced dataset. |
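The abstract's central construction is an augmentation graph built from neighbor relationships among target features in the pretrained model's feature space. The abstract does not specify the exact graph construction, so the following is only a minimal sketch under the common assumption that neighbors are found by cosine similarity and connected via a k-nearest-neighbor rule; the function name and parameters are hypothetical, not from the paper.

```python
import numpy as np

def build_augmentation_graph(features, k=3):
    """Sketch of a k-NN affinity graph over target features.

    Hypothetical approximation: connect each target feature to its k most
    cosine-similar neighbors, then symmetrize. The paper's actual augmentation
    graph and its spectral neighborhood clustering step may differ.
    """
    # L2-normalize so dot products equal cosine similarity.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-loops from neighbor search

    n = features.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        neighbors = np.argsort(sim[i])[-k:]  # indices of the k most similar features
        adj[i, neighbors] = 1.0
    # Symmetrize: keep an edge if either endpoint selected the other.
    return np.maximum(adj, adj.T)
```

A spectral step could then operate on the Laplacian of this adjacency matrix to find partitions, consistent with the "spectral neighborhood clustering" the abstract describes at a high level.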
DOI | 10.48550/arxiv.2403.10834 |