Maximum Classifier Discrepancy for Unsupervised Domain Adaptation
In this work, we present a method for unsupervised domain adaptation. Many adversarial learning methods train domain classifier networks to distinguish the features as either a source or target and train a feature generator network to mimic the discriminator. Two problems exist with these methods. First, the domain classifier only tries to distinguish the features as a source or target and thus does not consider task-specific decision boundaries between classes. Therefore, a trained generator can generate ambiguous features near class boundaries. Second, these methods aim to completely match the feature distributions between different domains, which is difficult because of each domain's characteristics. To solve these problems, we introduce a new approach that attempts to align distributions of source and target by utilizing the task-specific decision boundaries. We propose to maximize the discrepancy between two classifiers' outputs to detect target samples that are far from the support of the source. A feature generator learns to generate target features near the support to minimize the discrepancy. Our method outperforms other methods on several datasets of image classification and semantic segmentation. The codes are available at https://github.com/mil-tokyo/MCD_DA
Published in | 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition pp. 3723 - 3732 |
---|---|
Main Authors | Saito, Kuniaki; Watanabe, Kohei; Ushiku, Yoshitaka; Harada, Tatsuya |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.06.2018 |
Subjects | Feature extraction; Generators; Learning systems; Neural networks; Semantics; Task analysis; Training |
Online Access | Get full text |
Abstract | In this work, we present a method for unsupervised domain adaptation. Many adversarial learning methods train domain classifier networks to distinguish the features as either a source or target and train a feature generator network to mimic the discriminator. Two problems exist with these methods. First, the domain classifier only tries to distinguish the features as a source or target and thus does not consider task-specific decision boundaries between classes. Therefore, a trained generator can generate ambiguous features near class boundaries. Second, these methods aim to completely match the feature distributions between different domains, which is difficult because of each domain's characteristics. To solve these problems, we introduce a new approach that attempts to align distributions of source and target by utilizing the task-specific decision boundaries. We propose to maximize the discrepancy between two classifiers' outputs to detect target samples that are far from the support of the source. A feature generator learns to generate target features near the support to minimize the discrepancy. Our method outperforms other methods on several datasets of image classification and semantic segmentation. The codes are available at https://github.com/mil-tokyo/MCD_DA |
---|---|
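The core quantity in the abstract above is the "discrepancy": a distance between the class-probability outputs of two classifiers that share one feature generator. During adversarial training the classifiers are updated to maximize this quantity on target samples (exposing points far from the source support), and the generator is then updated to minimize it. The authors' full implementation is at the linked repository; the following is only a minimal numpy sketch of the discrepancy itself, with hypothetical logits for illustration.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over class logits (numerically stabilized)."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def discrepancy(logits1, logits2):
    """Mean absolute (L1) distance between the two classifiers'
    class-probability outputs, averaged over batch and classes.
    The classifier step maximizes this on target data; the
    generator step minimizes it."""
    p1, p2 = softmax(logits1), softmax(logits2)
    return np.abs(p1 - p2).mean()

# Hypothetical logits for 2 target samples over 3 classes.
a = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
b = np.array([[1.9, 0.6, 0.2], [0.1, 0.2, 2.0]])

print(discrepancy(a, a))  # identical classifiers agree -> 0.0
print(discrepancy(a, b))  # disagreement -> positive value
```

In the paper's training loop this scalar is computed on target batches: step one trains the generator and both classifiers on labeled source data; step two fixes the generator and maximizes the discrepancy with respect to the classifiers (while preserving source accuracy); step three fixes the classifiers and updates the generator to minimize it.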
Author | Watanabe, Kohei; Ushiku, Yoshitaka; Harada, Tatsuya; Saito, Kuniaki |
Author_xml | – sequence: 1 givenname: Kuniaki surname: Saito fullname: Saito, Kuniaki – sequence: 2 givenname: Kohei surname: Watanabe fullname: Watanabe, Kohei – sequence: 3 givenname: Yoshitaka surname: Ushiku fullname: Ushiku, Yoshitaka – sequence: 4 givenname: Tatsuya surname: Harada fullname: Harada, Tatsuya |
CODEN | IEEPAD |
ContentType | Conference Proceeding |
DBID | 6IE 6IH CBEJK RIE RIO |
DOI | 10.1109/CVPR.2018.00392 |
DatabaseName | IEEE Electronic Library (IEL) Conference Proceedings IEEE Proceedings Order Plan (POP) 1998-present by volume IEEE Xplore All Conference Proceedings IEEE Electronic Library (IEL) IEEE Proceedings Order Plans (POP) 1998-present |
Database_xml | – sequence: 1 dbid: RIE name: IEEE/IET Electronic Library url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/ sourceTypes: Publisher |
DeliveryMethod | fulltext_linktorsrc |
Discipline | Applied Sciences |
EISBN | 9781538664209 1538664208 |
EISSN | 1063-6919 |
EndPage | 3732 |
ExternalDocumentID | 8578490 |
Genre | orig-research |
IEDL.DBID | RIE |
IsPeerReviewed | false |
IsScholarly | true |
Language | English |
LinkModel | DirectLink |
PageCount | 10 |
ParticipantIDs | ieee_primary_8578490 |
PublicationCentury | 2000 |
PublicationDate | 2018-Jun |
PublicationDateYYYYMMDD | 2018-06-01 |
PublicationDate_xml | – month: 06 year: 2018 text: 2018-Jun |
PublicationDecade | 2010 |
PublicationTitle | 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition |
PublicationTitleAbbrev | CVPR |
PublicationYear | 2018 |
Publisher | IEEE |
Publisher_xml | – name: IEEE |
SourceID | ieee |
SourceType | Publisher |
StartPage | 3723 |
SubjectTerms | Feature extraction Generators Learning systems Neural networks Semantics Task analysis Training |
Title | Maximum Classifier Discrepancy for Unsupervised Domain Adaptation |
URI | https://ieeexplore.ieee.org/document/8578490 |
linkProvider | IEEE |