A review on Generative Adversarial Networks for image generation

Bibliographic Details
Published in: Computers & Graphics, Vol. 114, pp. 13–25
Main Authors: Trevisan de Souza, Vinicius Luis; Marques, Bruno Augusto Dorta; Batagelo, Harlen Costa; Gois, João Paulo
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.08.2023
DOI: 10.1016/j.cag.2023.05.010
ISSN: 0097-8493
Subjects: Deep image synthesis; Generative Adversarial Network; Generative models; Image generation
ORCID: 0009-0009-4524-9006 (Trevisan de Souza)
Email: vinicius.trevisan@ufabc.edu.br (Trevisan de Souza)

Abstract
Generative Adversarial Networks (GANs) are a deep learning architecture that uses two networks, a generator and a discriminator, which compete against each other to create realistic but previously unseen samples. They have become a popular research topic in recent years, particularly for image processing and synthesis, leading to many advances and applications in various fields. Given the profusion of published works and the interest from professionals in different areas, surveys on GANs are needed, especially for those starting out on the topic. In this work, we cover the basics and notable architectures of GANs, focusing on their applications in image generation. We also discuss how the main challenges of GAN architectures have been addressed, such as mode coverage, training stability, convergence, and the evaluation of image quality using metrics.

Highlights
• A review of GANs for image generation, aimed at readers who are new to the area.
• A comprehensive overview of GAN fundamentals and methods to address the most common issues.
• A detailed explanation of how various works applied GANs in image-based applications.
• A discussion of future directions for this area.
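For readers new to the area, the adversarial setup summarized in the abstract can be made concrete with a small training sketch. The following PyTorch snippet is an illustrative assumption, not code from the reviewed article: the toy one-dimensional data distribution, the tiny fully connected networks, and the hyperparameters were chosen only for brevity. It alternates a discriminator update (real samples pushed toward label 1, generated samples toward 0) with a generator update that tries to fool the discriminator.

# Minimal GAN training sketch (PyTorch), illustrating the generator/discriminator
# competition described in the abstract. Toy example under stated assumptions;
# the 1-D "real" data, network sizes, and hyperparameters are illustrative only.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 1, 64

# Generator: maps latent noise z to a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: estimates the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(2000):
    real = 2.0 + 0.5 * torch.randn(batch, data_dim)   # toy real data ~ N(2, 0.5^2)
    fake = G(torch.randn(batch, latent_dim))          # generated samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator, i.e. push D(G(z)) toward 1
    # (the commonly used non-saturating form of the original GAN objective).
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()

In practice, image GANs replace these toy multilayer perceptrons with convolutional generators and discriminators and add the stabilization and evaluation techniques the survey discusses.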