AttentionGAN: Unpaired Image-to-Image Translation Using Attention-Guided Generative Adversarial Networks

Bibliographic Details
Published in IEEE Transactions on Neural Networks and Learning Systems Vol. 34; No. 4; pp. 1972-1987
Main Authors Tang, Hao, Liu, Hong, Xu, Dan, Torr, Philip H. S., Sebe, Nicu
Format Journal Article
Language English
Published United States: IEEE, 01.04.2023
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects
Online Access Get full text
ISSN 2162-237X
2162-2388
DOI 10.1109/TNNLS.2021.3105725

Abstract State-of-the-art methods in image-to-image translation are capable of learning a mapping from a source domain to a target domain with unpaired image data. Although existing methods have achieved promising results, they still produce visual artifacts: they translate low-level information but not the high-level semantics of input images. One possible reason is that generators cannot perceive the most discriminative parts between the source and target domains, which makes the generated images low quality. In this article, we propose a new Attention-Guided Generative Adversarial Network (AttentionGAN) for the unpaired image-to-image translation task. AttentionGAN can identify the most discriminative foreground objects and minimize the change of the background. The attention-guided generators in AttentionGAN produce attention masks and then fuse the generation output with the attention masks to obtain high-quality target images. Accordingly, we also design a novel attention-guided discriminator that only considers attended regions. Extensive experiments on several generative tasks with eight public datasets demonstrate that the proposed method generates sharper and more realistic images than existing competitive models. The code is available at https://github.com/Ha0Tang/AttentionGAN.
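To make the fusion described in the abstract concrete, below is a minimal PyTorch-style sketch of attention-guided fusion. It is not the authors' released implementation (that code is at the GitHub link above); it collapses the idea to a single attention mask for brevity, and all class and function names are illustrative. A generator head predicts a translated content image and an attention mask; attended regions are taken from the generated content while unattended regions are copied from the input, so the background stays unchanged.

```python
# Illustrative sketch only, not the official AttentionGAN code.
import torch
import torch.nn as nn


class AttentionFusionGenerator(nn.Module):
    """Generator head that fuses generated content with an attention mask."""

    def __init__(self, backbone: nn.Module, channels: int = 64):
        super().__init__()
        self.backbone = backbone  # any image-to-image backbone producing feature maps
        self.content_head = nn.Conv2d(channels, 3, kernel_size=7, padding=3)
        self.attention_head = nn.Conv2d(channels, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor):
        feat = self.backbone(x)                           # (B, channels, H, W)
        content = torch.tanh(self.content_head(feat))     # translated foreground content
        attn = torch.sigmoid(self.attention_head(feat))   # attention mask in [0, 1]
        # Fuse: attended regions come from generated content, the rest from the
        # input, which keeps the background largely unchanged.
        out = attn * content + (1.0 - attn) * x
        return out, attn


def mask_for_discriminator(image: torch.Tensor, attn: torch.Tensor) -> torch.Tensor:
    # The abstract's attention-guided discriminator "only considers attended
    # regions"; masking the image before a standard discriminator is one simple
    # way to realize that (an assumption, not necessarily the paper's design).
    return attn * image


if __name__ == "__main__":
    # Tiny stand-in backbone just so the sketch runs end to end.
    backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
    gen = AttentionFusionGenerator(backbone, channels=64)
    x = torch.randn(2, 3, 128, 128)
    y, attn = gen(x)
    print(y.shape, attn.shape)  # -> (2, 3, 128, 128) and (2, 1, 128, 128)
```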
Author Torr, Philip H. S.
Sebe, Nicu
Liu, Hong
Tang, Hao
Xu, Dan
Author_xml – sequence: 1
  givenname: Hao
  orcidid: 0000-0002-2077-1246
  surname: Tang
  fullname: Tang, Hao
  email: hao.tang@vision.ee.ethz.ch
  organization: Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich, Switzerland
– sequence: 2
  givenname: Hong
  orcidid: 0000-0002-7498-6541
  surname: Liu
  fullname: Liu, Hong
  email: hongliu@pku.edu.cn
  organization: Shenzhen Graduate School, Peking University, Shenzhen, China
– sequence: 3
  givenname: Dan
  orcidid: 0000-0003-0136-9603
  surname: Xu
  fullname: Xu, Dan
  organization: Department of Computer Science and Engineering, Hong Kong University of Science and Technology (HKUST), Hong Kong
– sequence: 4
  givenname: Philip H. S.
  surname: Torr
  fullname: Torr, Philip H. S.
  organization: Department of Engineering Science, University of Oxford, Oxford, U.K
– sequence: 5
  givenname: Nicu
  orcidid: 0000-0002-6597-7248
  surname: Sebe
  fullname: Sebe, Nicu
  organization: Department of Information Engineering and Computer Science (DISI), University of Trento, Trento, Italy
BackLink https://www.ncbi.nlm.nih.gov/pubmed/34473628 (View this record in MEDLINE/PubMed)
CODEN ITNNAL
CitedBy_id crossref_primary_10_1109_TIP_2024_3381833
crossref_primary_10_1109_TNNLS_2023_3321076
crossref_primary_10_1016_j_optlaseng_2024_108042
crossref_primary_10_1109_TCE_2023_3347274
crossref_primary_10_1016_j_engstruct_2025_119636
crossref_primary_10_3390_math13010177
crossref_primary_10_1109_TASLP_2024_3515794
crossref_primary_10_1145_3698105
crossref_primary_10_1016_j_cmpb_2024_108007
crossref_primary_10_1109_TCE_2023_3329574
crossref_primary_10_3389_fbioe_2024_1330713
crossref_primary_10_1109_TAFFC_2022_3207007
crossref_primary_10_1109_TMM_2021_3091847
crossref_primary_10_1109_ACCESS_2025_3531366
crossref_primary_10_3390_math12203244
crossref_primary_10_1109_TCSVT_2024_3404256
crossref_primary_10_1016_j_eswa_2024_123167
crossref_primary_10_1007_s11263_022_01722_5
crossref_primary_10_1016_j_compbiomed_2024_108472
crossref_primary_10_1515_phys_2024_0060
crossref_primary_10_1007_s00259_024_06961_x
crossref_primary_10_1007_s10489_022_04352_z
crossref_primary_10_1109_TPAMI_2023_3298868
crossref_primary_10_1117_1_JEI_33_4_043023
crossref_primary_10_1109_TPAMI_2024_3355248
crossref_primary_10_1007_s11018_024_02346_6
crossref_primary_10_1016_j_bspc_2024_107159
crossref_primary_10_32446_0368_1025it_2024_4_23_31
crossref_primary_10_1109_ACCESS_2023_3338629
crossref_primary_10_1109_TMI_2023_3288940
crossref_primary_10_1088_1612_202X_ad26eb
crossref_primary_10_1109_TII_2023_3252410
crossref_primary_10_1007_s10489_024_05522_x
crossref_primary_10_1007_s12046_022_01807_4
crossref_primary_10_1109_TAI_2024_3483731
crossref_primary_10_1371_journal_pone_0310097
crossref_primary_10_1109_ACCESS_2023_3321118
crossref_primary_10_1016_j_cag_2024_104086
crossref_primary_10_1109_TNNLS_2023_3282306
crossref_primary_10_1007_s00521_023_09345_8
crossref_primary_10_1109_JBHI_2023_3252665
crossref_primary_10_1007_s10278_024_01385_3
crossref_primary_10_1109_ACCESS_2024_3491792
crossref_primary_10_1007_s11042_024_19361_y
crossref_primary_10_1007_s00371_023_03115_2
crossref_primary_10_1109_TCSVT_2024_3382621
crossref_primary_10_1016_j_aei_2024_103070
crossref_primary_10_1117_1_JEI_32_6_063030
crossref_primary_10_3390_info16020157
crossref_primary_10_1007_s40747_023_01079_3
crossref_primary_10_1109_TMM_2023_3274990
crossref_primary_10_1007_s10409_024_24076_x
crossref_primary_10_1016_j_neunet_2024_106877
crossref_primary_10_1117_1_JRS_16_044520
crossref_primary_10_1007_s11042_021_11252_w
crossref_primary_10_1109_TPAMI_2022_3155989
crossref_primary_10_1080_01431161_2023_2169593
crossref_primary_10_1109_TNNLS_2023_3315778
crossref_primary_10_1002_acm2_14212
crossref_primary_10_1109_LRA_2024_3414270
crossref_primary_10_1109_TAI_2022_3187384
crossref_primary_10_1029_2024EA003565
crossref_primary_10_1038_s41467_023_44385_7
crossref_primary_10_1109_TNNLS_2023_3274221
crossref_primary_10_1016_j_ndteint_2024_103174
crossref_primary_10_1016_j_neunet_2022_01_013
crossref_primary_10_7555_JBR_36_20220037
crossref_primary_10_1109_TIFS_2024_3372803
crossref_primary_10_1016_j_patcog_2024_110445
crossref_primary_10_1360_SSI_2022_0092
crossref_primary_10_1109_ACCESS_2023_3322146
crossref_primary_10_1007_s42979_023_02040_4
crossref_primary_10_1016_j_compbiomed_2025_109889
crossref_primary_10_1109_TPAMI_2023_3298721
crossref_primary_10_1016_j_compbiomed_2022_105878
crossref_primary_10_1109_MWC_004_2100362
crossref_primary_10_1109_TIFS_2023_3301729
crossref_primary_10_1007_s10489_025_06379_4
crossref_primary_10_1145_3672400
crossref_primary_10_1109_TIP_2021_3109531
crossref_primary_10_1109_TMM_2023_3328176
crossref_primary_10_1016_j_media_2024_103390
crossref_primary_10_1109_TAFFC_2023_3327118
crossref_primary_10_1109_ACCESS_2024_3438992
crossref_primary_10_3390_s24113424
crossref_primary_10_3390_rs16020242
crossref_primary_10_1038_s41598_023_32398_7
crossref_primary_10_1016_j_compbiomed_2024_108380
crossref_primary_10_1109_TPAMI_2022_3212915
Cites_doi 10.1109/CVPR.2019.00252
10.1109/CVPRW.2018.00122
10.1109/CVPR.2019.00820
10.1007/978-3-030-58595-2_43
10.1109/TMM.2021.3091847
10.1109/CVPR.2017.683
10.1007/978-3-030-20887-5_1
10.1145/3343031.3350980
10.1109/CVPR.2018.00917
10.1109/CVPR.2018.00573
10.1109/CVPR.2017.632
10.1007/978-3-030-01216-8_11
10.1007/978-3-030-01246-5_3
10.1109/WACV.2016.7477553
10.1109/CVPR.2018.00593
10.1109/CVPR.2017.241
10.1007/978-3-030-01258-8_40
10.1109/CVPR42600.2020.00789
10.1145/3474085.3475596
10.18772/10539/20690
10.1063/1.4902458
10.1109/ICCV.2017.310
10.1145/3240508.3240704
10.1109/IJCNN.2019.8851881
10.1007/978-3-030-01249-6_50
10.1109/TIP.2019.2914583
10.1109/ICCV.2015.425
10.1145/3394171.3416270
10.1007/978-3-030-01219-9_11
10.1109/TIP.2020.3021789
10.1080/02699930903485076
10.1109/CVPR.2018.00412
10.1007/978-3-030-01261-8_34
10.1109/CVPR.2018.00916
10.1109/IVMSPW.2018.8448850
10.1109/ICCV.2017.244
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
DBID 97E
RIA
RIE
AAYXX
CITATION
NPM
7QF
7QO
7QP
7QQ
7QR
7SC
7SE
7SP
7SR
7TA
7TB
7TK
7U5
8BQ
8FD
F28
FR3
H8D
JG9
JQ2
KR7
L7M
L~C
L~D
P64
7X8
DOI 10.1109/TNNLS.2021.3105725
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
Aluminium Industry Abstracts
Biotechnology Research Abstracts
Calcium & Calcified Tissue Abstracts
Ceramic Abstracts
Chemoreception Abstracts
Computer and Information Systems Abstracts
Corrosion Abstracts
Electronics & Communications Abstracts
Engineered Materials Abstracts
Materials Business File
Mechanical & Transportation Engineering Abstracts
Neurosciences Abstracts
Solid State and Superconductivity Abstracts
METADEX
Technology Research Database
ANTE: Abstracts in New Technology & Engineering
Engineering Research Database
Aerospace Database
Materials Research Database
ProQuest Computer Science Collection
Civil Engineering Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Biotechnology and BioEngineering Abstracts
MEDLINE - Academic
DatabaseTitle CrossRef
PubMed
Materials Research Database
Technology Research Database
Computer and Information Systems Abstracts – Academic
Mechanical & Transportation Engineering Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Materials Business File
Aerospace Database
Engineered Materials Abstracts
Biotechnology Research Abstracts
Chemoreception Abstracts
Advanced Technologies Database with Aerospace
ANTE: Abstracts in New Technology & Engineering
Civil Engineering Abstracts
Aluminium Industry Abstracts
Electronics & Communications Abstracts
Ceramic Abstracts
Neurosciences Abstracts
METADEX
Biotechnology and BioEngineering Abstracts
Computer and Information Systems Abstracts Professional
Solid State and Superconductivity Abstracts
Engineering Research Database
Calcium & Calcified Tissue Abstracts
Corrosion Abstracts
MEDLINE - Academic
DatabaseTitleList
MEDLINE - Academic
PubMed
Materials Research Database
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: https://proxy.k.utb.cz/login?url=http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISSN 2162-2388
EndPage 1987
ExternalDocumentID 34473628
10_1109_TNNLS_2021_3105725
9527389
Genre orig-research
Journal Article
GrantInformation_xml – fundername: EU H2020 AI4Media Project
  grantid: 951911
– fundername: Shenzhen Fundamental Research Program
  grantid: GXWD20201231165807007-20200807164903001
  funderid: 10.13039/501100017607
– fundername: National Natural Science Foundation of China
  grantid: 62073004
  funderid: 10.13039/501100001809
– fundername: Italy-China Collaboration Project TALENT
  grantid: 2018YFE0118400
GroupedDBID 0R~
4.4
5VS
6IK
97E
AAJGR
AARMG
AASAJ
AAWTH
ABAZT
ABQJQ
ABVLG
ACIWK
ACPRK
AENEX
AFRAH
AGQYO
AGSQL
AHBIQ
AKJIK
AKQYR
ALMA_UNASSIGNED_HOLDINGS
ATWAV
BEFXN
BFFAM
BGNUA
BKEBE
BPEOZ
EBS
EJD
IFIPE
IPLJI
JAVBF
M43
MS~
O9-
OCL
PQQKQ
RIA
RIE
RNS
AAYXX
CITATION
RIG
NPM
7QF
7QO
7QP
7QQ
7QR
7SC
7SE
7SP
7SR
7TA
7TB
7TK
7U5
8BQ
8FD
F28
FR3
H8D
JG9
JQ2
KR7
L7M
L~C
L~D
P64
7X8
ID FETCH-LOGICAL-c461t-343eef36ec57fbb14195cf7d7d9622629acad7d0626d8814fd026b2505ee29b93
IEDL.DBID RIE
ISSN 2162-237X
2162-2388
IngestDate Fri Jul 11 01:47:20 EDT 2025
Mon Jun 30 06:52:41 EDT 2025
Thu Jan 02 22:51:38 EST 2025
Tue Jul 01 00:27:41 EDT 2025
Thu Apr 24 22:50:51 EDT 2025
Wed Aug 27 02:14:18 EDT 2025
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed false
IsScholarly true
Issue 4
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c461t-343eef36ec57fbb14195cf7d7d9622629acad7d0626d8814fd026b2505ee29b93
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
content type line 23
ORCID 0000-0003-0136-9603
0000-0002-7498-6541
0000-0002-2077-1246
0000-0002-6597-7248
OpenAccessLink https://doi.org/10.1109/TNNLS.2021.3105725
PMID 34473628
PQID 2795808263
PQPubID 85436
PageCount 16
ParticipantIDs ieee_primary_9527389
pubmed_primary_34473628
crossref_citationtrail_10_1109_TNNLS_2021_3105725
proquest_miscellaneous_2569374960
proquest_journals_2795808263
crossref_primary_10_1109_TNNLS_2021_3105725
ProviderPackageCode CITATION
AAYXX
PublicationCentury 2000
PublicationDate 2023-04-01
PublicationDateYYYYMMDD 2023-04-01
PublicationDate_xml – month: 04
  year: 2023
  text: 2023-04-01
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
– name: Piscataway
PublicationTitle IEEE Transactions on Neural Networks and Learning Systems
PublicationTitleAbbrev TNNLS
PublicationTitleAlternate IEEE Trans Neural Netw Learn Syst
PublicationYear 2023
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref13
ref15
ref14
Liu (ref44)
ref10
Mejjati (ref11)
Zhu (ref38)
ref17
ref16
ref19
ref18
Bińkowski (ref52)
ref51
Donahue (ref49)
ref46
ref48
ref47
Kim (ref5)
ref42
Goodfellow (ref1)
ref41
ref8
ref7
ref9
ref4
Mo (ref37)
ref3
ref6
ref40
Benaim (ref32)
ref35
ref34
ref36
Kim (ref12)
ref31
ref30
ref33
ref2
ref39
Dumoulin (ref50)
ref24
Perarnau (ref23)
ref25
ref20
Liu (ref45)
ref22
ref28
ref27
Tang (ref26)
ref29
Mirza (ref21) 2014
Li (ref43) 2016
Heusel (ref53)
References_xml – ident: ref28
  doi: 10.1109/CVPR.2019.00252
– ident: ref33
  doi: 10.1109/CVPRW.2018.00122
– volume-title: Proc. ICLR
  ident: ref12
  article-title: U-GAT-IT: Unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation
– ident: ref16
  doi: 10.1109/CVPR.2019.00820
– ident: ref27
  doi: 10.1007/978-3-030-58595-2_43
– ident: ref20
  doi: 10.1109/TMM.2021.3091847
– start-page: 700
  volume-title: Proc. NeurIPS
  ident: ref44
  article-title: Unsupervised image-to-image translation networks
– ident: ref48
  doi: 10.1109/CVPR.2017.683
– ident: ref34
  doi: 10.1007/978-3-030-20887-5_1
– ident: ref24
  doi: 10.1145/3343031.3350980
– ident: ref31
  doi: 10.1109/CVPR.2018.00917
– ident: ref35
  doi: 10.1109/CVPR.2018.00573
– ident: ref2
  doi: 10.1109/CVPR.2017.632
– ident: ref8
  doi: 10.1007/978-3-030-01216-8_11
– volume-title: Proc. BMVC
  ident: ref26
  article-title: Bipartite graph reasoning gans for person image generation
– volume-title: Proc. NeurIPS
  ident: ref1
  article-title: Generative adversarial nets
– volume-title: Proc. ICLR
  ident: ref50
  article-title: Adversarially learned inference
– ident: ref15
  doi: 10.1007/978-3-030-01246-5_3
– ident: ref51
  doi: 10.1109/WACV.2016.7477553
– ident: ref17
  doi: 10.1109/CVPR.2018.00593
– volume-title: Proc. NeurIPS Workshop
  ident: ref23
  article-title: Invertible conditional GANs for image editing
– ident: ref46
  doi: 10.1109/CVPR.2017.241
– ident: ref6
  doi: 10.1007/978-3-030-01258-8_40
– ident: ref30
  doi: 10.1109/CVPR42600.2020.00789
– ident: ref19
  doi: 10.1145/3474085.3475596
– volume-title: arXiv:1411.1784
  year: 2014
  ident: ref21
  article-title: Conditional generative adversarial nets
– start-page: 465
  volume-title: Proc. NeurIPS
  ident: ref38
  article-title: Toward multimodal image-to-image translation
– ident: ref41
  doi: 10.18772/10539/20690
– start-page: 469
  volume-title: Proc. NeurIPS
  ident: ref45
  article-title: Coupled generative adversarial networks
– volume-title: arXiv:1610.05586
  year: 2016
  ident: ref43
  article-title: Deep identity-aware transfer of facial attributes
– volume-title: Proc. ICLR
  ident: ref37
  article-title: InstaGAN: Instance-aware image-to-image translation
– ident: ref42
  doi: 10.1063/1.4902458
– ident: ref4
  doi: 10.1109/ICCV.2017.310
– ident: ref25
  doi: 10.1145/3240508.3240704
– ident: ref10
  doi: 10.1109/IJCNN.2019.8851881
– start-page: 3693
  volume-title: Proc. NeurIPS
  ident: ref11
  article-title: Unsupervised attention-guided image to image translation
– ident: ref47
  doi: 10.1007/978-3-030-01249-6_50
– ident: ref13
  doi: 10.1109/TIP.2019.2914583
– ident: ref39
  doi: 10.1109/ICCV.2015.425
– ident: ref29
  doi: 10.1145/3394171.3416270
– ident: ref14
  doi: 10.1007/978-3-030-01219-9_11
– ident: ref22
  doi: 10.1109/TIP.2020.3021789
– ident: ref40
  doi: 10.1080/02699930903485076
– volume-title: Proc. ICLR
  ident: ref52
  article-title: Demystifying MMD GANs
– ident: ref36
  doi: 10.1109/CVPR.2018.00412
– ident: ref7
  doi: 10.1007/978-3-030-01261-8_34
– start-page: 6629
  volume-title: Proc. NeurIPS
  ident: ref53
  article-title: GANs trained by a two time-scale update rule converge to a local nash equilibrium
– ident: ref18
  doi: 10.1109/CVPR.2018.00916
– ident: ref9
  doi: 10.1109/IVMSPW.2018.8448850
– start-page: 752
  volume-title: Proc. NeurIPS
  ident: ref32
  article-title: One-sided unsupervised domain mapping
– ident: ref3
  doi: 10.1109/ICCV.2017.244
– volume-title: Proc. ICLR
  ident: ref49
  article-title: Adversarial feature learning
– start-page: 1857
  volume-title: Proc. ICML
  ident: ref5
  article-title: Learning to discover cross-domain relations with generative adversarial networks
SSID ssj0000605649
Score 2.6674566
Snippet State-of-the-art methods in the image-to-image translation are capable of learning a mapping from a source domain to a target domain with unpaired image data....
SourceID proquest
pubmed
crossref
ieee
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 1972
SubjectTerms Attention guided
Computational modeling
Domains
Generative adversarial networks
generative adversarial networks (GANs)
Generators
Image quality
Masks
Semantics
Target masking
Task analysis
Training
Training data
Translation
unpaired image-to-image translation
Title AttentionGAN: Unpaired Image-to-Image Translation Using Attention-Guided Generative Adversarial Networks
URI https://ieeexplore.ieee.org/document/9527389
https://www.ncbi.nlm.nih.gov/pubmed/34473628
https://www.proquest.com/docview/2795808263
https://www.proquest.com/docview/2569374960
Volume 34
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=AttentionGAN%3A+Unpaired+Image-to-Image+Translation+Using+Attention-Guided+Generative+Adversarial+Networks&rft.jtitle=IEEE+transaction+on+neural+networks+and+learning+systems&rft.au=Tang%2C+Hao&rft.au=Liu%2C+Hong&rft.au=Xu%2C+Dan&rft.au=Torr%2C+Philip+H.+S.&rft.date=2023-04-01&rft.pub=IEEE&rft.issn=2162-237X&rft.volume=34&rft.issue=4&rft.spage=1972&rft.epage=1987&rft_id=info:doi/10.1109%2FTNNLS.2021.3105725&rft_id=info%3Apmid%2F34473628&rft.externalDocID=9527389