Robust self-supervised learning for source-free domain adaptation

Bibliographic Details
Published in: Signal, Image and Video Processing, Vol. 17, No. 5, pp. 2405-2413
Main Authors: Tian, Liang; Zhou, Lihua; Zhang, Hao; Wang, Zhenbin; Ye, Mao
Format: Journal Article
Language: English
Published: London: Springer London, 01.07.2023 (Springer Nature B.V.)

Abstract: Source-free domain adaptation (SFDA) derives from unsupervised domain adaptation (UDA) and applies to the practical situation in which the source domain data are not accessible. In this setting, self-supervised learning is widely used in previous works. However, inaccurate pseudo-labels are hardly avoidable, and they degrade the adapted target model. In this work, we propose an effective method, named RS2L (robust self-supervised learning), to reduce the negative impact of inaccurate pseudo-labels. Two strategies are adopted. The first, the structure-preserved pseudo-labeling strategy, generates better pseudo-labels from the stored predictions of the k-closest neighbors. The second is self-supervised learning with masks. We use threshold masks to select samples for the two operations, i.e., self-supervised learning and structure-preserved learning. Because the masks use different threshold values, some samples may participate in both operations. Experiments on three benchmark datasets show that our method achieves state-of-the-art results.
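As a rough illustration of the first strategy, the following minimal NumPy sketch refines pseudo-labels by averaging the stored softmax predictions of each sample's k-closest neighbors in feature space. All names, shapes, and the default k=4 are illustrative assumptions, not taken from the paper.

    # Hypothetical sketch of structure-preserved pseudo-labeling: labels are
    # read off the averaged stored predictions of the k-closest neighbors.
    import numpy as np

    def knn_pseudo_labels(features, bank_features, bank_probs, k=4):
        """features:      (n, d) L2-normalized target features for a batch
        bank_features: (N, d) L2-normalized features in the memory bank
        bank_probs:    (N, C) stored softmax predictions for the bank samples"""
        sim = features @ bank_features.T                  # cosine similarity, (n, N)
        nn_idx = np.argsort(-sim, axis=1)[:, :k]          # k most similar bank entries
        neighbor_probs = bank_probs[nn_idx].mean(axis=1)  # averaged neighbor predictions, (n, C)
        # Pseudo-label and its confidence for each batch sample.
        return neighbor_probs.argmax(axis=1), neighbor_probs.max(axis=1)

Averaging over stored neighbor predictions smooths out individual misclassifications, which is why exploiting neighborhood structure helps when single-sample pseudo-labels are noisy.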
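The second strategy, self-supervised learning with masks, can be sketched in the same hypothetical style: two boolean masks with different thresholds select samples for the two losses, so a sample can contribute to both. The thresholds tau_ssl and tau_sp below are illustrative hyper-parameters, not values from the paper.

    # Hypothetical threshold-mask selection over per-sample confidences.
    import numpy as np

    def threshold_masks(confidence, tau_ssl=0.9, tau_sp=0.6):
        mask_ssl = confidence > tau_ssl  # samples for the self-supervised loss
        mask_sp = confidence > tau_sp    # samples for the structure-preserved loss
        return mask_ssl, mask_sp

    # Example with the confidences returned by knn_pseudo_labels:
    # conf = [0.95, 0.70, 0.40] gives mask_ssl = [True, False, False]
    # and mask_sp = [True, True, False]; the first sample joins both losses.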
Authors: Tian, Liang; Zhou, Lihua (lihua.zhou@std.uestc.edu.cn); Zhang, Hao; Wang, Zhenbin; Ye, Mao
Affiliation (all authors): School of Computer Science and Engineering, University of Electronic Science and Technology of China
Copyright: The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2023.
DOI: 10.1007/s11760-022-02457-z
Discipline: Engineering; Computer Science
EISSN: 1863-1711
Funding:
- National Natural Science Foundation of China, grant 62276048 (funder ID: 10.13039/501100001809)
- Sichuan Province Science and Technology Support Program, grant 2020YFG0476 (funder ID: 10.13039/100012542)
ISSN: 1863-1703
Peer reviewed: Yes
Keywords: Local structure clustering; Source-free domain adaptation; Self-supervised learning
Journal abbreviation: SIViP
Subjects: Adaptation; Computer Imaging; Computer Science; Domains; Image Processing and Computer Vision; Labels; Masks; Multimedia Information Systems; Original Paper; Pattern Recognition and Graphics; Robustness; Self-supervised learning; Signal, Image and Speech Processing; Vision
Online access:
- https://link.springer.com/article/10.1007/s11760-022-02457-z
- https://www.proquest.com/docview/2815363455