RSFace: subject agnostic face swapping with expression high fidelity

Bibliographic Details
Published in: The Visual Computer (International Journal of Computer Graphics), Vol. 39, No. 11, pp. 5497–5511
Main Authors: Yang, Gaoming; Wang, Tao; Fang, Xianjin; Zhang, Ji
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.11.2023
ISSN: 0178-2789
EISSN: 1432-2315
DOI: 10.1007/s00371-022-02675-z

Abstract: Face swapping has shown remarkable progress with the rapid development of deep learning. In particular, the emergence of subject-agnostic methods has broadened the range of applications of face swapping, and high-fidelity implementations have improved the naturalness of generated faces. However, some high-fidelity face swapping methods still suffer from expression distortion. In this work, we propose an extended Adaptive Embedding Integration Network (AEI-Net) to improve its performance in synthesizing swapped faces in the wild. First, we add a face reenactment module to synchronize the expressions of the input faces and reduce the influence of irrelevant attributes on the synthesis results. Second, we train AEI-Net with a new attribute matching loss to improve the consistency between the generated results and the target face expressions. Finally, extensive experiments on faces in the wild demonstrate that our method restores expression and posture better than previous methods while maintaining identity.
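The abstract describes two additions to the AEI-Net pipeline: a face reenactment module that synchronizes the expressions of the input faces before swapping, and an attribute matching loss that ties the swapped result to the target's attributes. The sketch below is only an illustration of that idea, not the authors' implementation; the module interfaces (reenactor, identity_encoder, attr_encoder, generator), the multi-level feature comparison, and the weight lambda_attr are all assumptions.

    import torch.nn.functional as F

    def attribute_matching_loss(attr_encoder, swapped, target):
        # Hypothetical form of the loss: average L2 distance between multi-level
        # attribute features of the swapped face and of the target face.
        feats_swapped = attr_encoder(swapped)   # assumed: returns a list of feature maps
        feats_target = attr_encoder(target)
        losses = [F.mse_loss(fs, ft) for fs, ft in zip(feats_swapped, feats_target)]
        return sum(losses) / len(losses)

    def training_step(reenactor, identity_encoder, attr_encoder, generator,
                      source, target, lambda_attr=1.0):
        # 1) Face reenactment (assumed interface): drive the source face with the
        #    target's expression and pose so that only identity differs afterwards.
        source_reenacted = reenactor(source, target)

        # 2) AEI-Net-style swap: inject the source identity into the target's attributes.
        z_id = identity_encoder(source_reenacted)
        z_attr = attr_encoder(target)
        swapped = generator(z_id, z_attr)

        # 3) New attribute matching term; AEI-Net's original identity, reconstruction,
        #    and adversarial losses would be added alongside it (not shown here).
        loss = lambda_attr * attribute_matching_loss(attr_encoder, swapped, target)
        return swapped, loss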
Authors:
1. Yang, Gaoming (School of Computer Science and Engineering, Anhui University of Science and Technology)
2. Wang, Tao (taowang@aust.edu.cn; School of Computer Science and Engineering, Anhui University of Science and Technology)
3. Fang, Xianjin (School of Computer Science and Engineering, Anhui University of Science and Technology)
4. Zhang, Ji (School of Mathematics, Physics, and Computing, University of Southern Queensland)
Copyright: The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Keywords: Attribute matching loss; Face reenactment; Face swapping; High fidelity
References
Natsume, R., Yatagawa, T., Morishima, S.: RSGAN: Face swapping and editing using face and hair representation in latent spaces. In: ACM SIGGRAPH 2018 Posters, pp. 1–2 (2018). https://doi.org/10.1145/3230744.3230818
Ha, S., Kersner, M., Kim, B., Seo, S., Kim, D.: Marionette: Few-shot face reenactment preserving identity of unseen targets. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pp. 10893–10900 (2020). https://doi.org/10.1609/aaai.v34i07.6721
Deng, Y., Yang, J., Xu, S., Chen, D., Jia, Y., Tong, X.: Accurate 3d face reconstruction with weakly-supervised learning: From single image to image set. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops(CVPRW), pp. 285–295 (2019). https://doi.org/10.1109/CVPRW.2019.00038
DeepFakes. https://github.com/ondyari/FaceForensics/tree/master/dataset/DeepFakes. Accessed:2020-12-08
Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 1501–1510 (2017). https://doi.org/10.1109/ICCV.2017.167
Dang, H., Liu, F., Stehouwer, J., Liu, X., Jain, A.K.: On the detection of digital face manipulation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern recognition(CVPR), pp. 5781–5790. https://doi.org/10.1109/CVPR42600.2020.00582
Zhang, J., Zeng, X., Wang, M., Pan, Y., Liu, L., Liu, Y., Ding, Y., Fan, C.: Freenet: Multi-identity face reenactment. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020). https://doi.org/10.1109/CVPR42600.2020.00537
Parkhi, O.M., Vedaldi, A., Zisserman, A.: Deep face recognition. In: Proceedings of the British Machine Vision Conference (BMVC), pp. 41–14112 (2015). https://doi.org/10.5244/C.29.41
Wang, Y., Chen, X., Zhu, J., Chu, W., Tai, Y., Wang, C., Li, J., Wu, Y., Huang, F., Ji, R.: Hififace: 3d shape and semantic prior guided high fidelity face swapping. In: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence(IJCAI), pp. 1136–1142 (2021). https://doi.org/10.24963/ijcai.2021/157
Chang, F.-J., Tran, A.T., Hassner, T., Masi, I., Nevatia, R., Medioni, G.: Deep, landmark-free FAME: Face alignment, modeling, and expression estimation. Int. J. Comput. Vis. 127(6), 930–956 (2019). https://doi.org/10.1007/s11263-019-01151-x
FaceSwap. https://github.com/ondyari/FaceForensics/tree/master/dataset/FaceSwapKowalski. Accessed:2020-10-17
Burkov, E., Pasechnik, I., Grigorev, A., Lempitsky, V.: Neural head reenactment with latent pose descriptors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern recognition(CVPR), pp. 13786–13795 (2020). https://doi.org/10.1109/CVPR42600.2020.01380
Deng, J., Guo, J., Xue, N., Zafeiriou, S.: Arcface: Additive angular margin loss for deep face recognition. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4690–4699 (2019). https://doi.org/10.1109/TPAMI.2021.3087709
Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D.H., Hawk, S.T., Van Knippenberg, A.: Presentation and validation of the Radboud Faces Database. Cogn. Emot. 24(8), 1377–1388 (2010). https://doi.org/10.1080/02699930903485076
Bao, J., Chen, D., Wen, F., Li, H., Hua, G.: Towards open-set identity preserving face synthesis. In: Proceedings of the IEEE Conference on Computer Vision and Pattern recognition (CVPR), pp. 6713–6722 (2018). https://doi.org/10.1109/CVPR.2018.00702
Zhang, K., Zhang, Z., Li, Z., Qiao, Y.: Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process. Lett. 23(10), 1499–1503 (2016). https://doi.org/10.1109/LSP.2016.2603342
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. Adv. Neural Inf. Process. Syst. 27, 2672–2680 (2014)
Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4401–4410 (2019). https://doi.org/10.1109/TPAMI.2020.2970919
Nirkin, Y., Keller, Y., Hassner, T.: Fsgan: Subject agnostic face swapping and reenactment. In: Proceedings of the IEEE/CVF International Conference on Computer vision(ICCV), pp. 7184–7193 (2019). https://doi.org/10.1109/ICCV.2019.00728
Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive growing of GANs for improved quality, stability, and variation. In: International Conference on Learning Representations (ICLR) (2018)
Cao, M., Huang, S., Wang, H., Wang, X., Shen, L., Wang, S., Bao, L., Li, Z., Luo, J.: UniFaceGAN: A unified framework for temporally consistent facial video editing. IEEE Trans. Image Process. 30, 6107–6116 (2021). https://doi.org/10.1109/TIP.2021.3089909
Thies, J., Zollhofer, M., Stamminger, M., Theobalt, C., Nießner, M.: Face2face: Real-time face capture and reenactment of rgb videos. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2387–2395 (2016). https://doi.org/10.1109/CVPR.2016.262
Rossler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., Niessner, M.: Faceforensics++: Learning to detect manipulated facial images. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 1–11 (2019). https://doi.org/10.1109/ICCV.2019.00009
Zhu, Y., Li, Q., Wang, J., Xu, C.-Z., Sun, Z.: One shot face swapping on megapixels. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR), pp. 4834–4844 (2021). https://doi.org/10.1109/CVPR46437.2021.00480
Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of stylegan. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern recognition (CVPR), pp. 8110–8119 (2020). https://doi.org/10.1109/CVPR42600.2020.00813
Tyagi, S., Yadav, D.: A detailed analysis of image and video forgery detection techniques. Vis. Comput. 38, 1–21 (2022). https://doi.org/10.1007/s00371-021-02347-4
Bitouk, D., Kumar, N., Dhillon, S., Belhumeur, P., Nayar, S.K.: Face swapping: automatically replacing faces in photographs. In: ACM SIGGRAPH 2008 Papers, pp. 1–8 (2008). https://doi.org/10.1145/1399504.1360638
Li, L., Bao, J., Yang, H., Chen, D., Wen, F.: Advancing high fidelity identity swapping for forgery detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR), pp. 5074–5083 (2020). https://doi.org/10.1109/CVPR42600.2020.00512
Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1125–1134 (2017). https://doi.org/10.1109/CVPR.2017.632
Luo, Y., Zhang, Y., Yan, J., Liu, W.: Generalizing face forgery detection with high-frequency features. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR), pp. 16317–16326. https://doi.org/10.1109/CVPR46437.2021.01605
Wang, C., Deng, W.: Representative forgery mining for fake face detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR), pp. 14923–14932 (2021). https://doi.org/10.1109/CVPR46437.2021.01468
Wang, J., Sun, K., Cheng, T., Jiang, B., Deng, C., Zhao, Y., Liu, D., Mu, Y., Tan, M., Wang, X.: Deep high-resolution representation learning for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 43(10), 3349–3364 (2020). https://doi.org/10.1109/TPAMI.2020.2983686
Wang, H., Wang, Y., Zhou, Z., Ji, X., Gong, D., Zhou, J., Li, Z., Liu, W.: Cosface: Large margin cosine loss for deep face recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern recognition(CVPR), pp. 5265–5274 (2018). https://doi.org/10.1109/CVPR.2018.00552
Chen, R., Chen, X., Ni, B., Ge, Y.: Simswap: An efficient framework for high fidelity face swapping. In: Proceedings of the 28th ACM International Conference on Multimedia, pp. 2003–2011 (2020). https://doi.org/10.1145/3394171.3413630
Suwajanakorn, S., Seitz, S.M., Kemelmacher-Shlizerman, I.: Synthesizing Obama: learning lip sync from audio. ACM Trans. Graph. 36(4), 1–13 (2017). https://doi.org/10.1145/3072959.3073640
Ruiz, N., Chong, E., Rehg, J.M.: Fine-grained head pose estimation without keypoints. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition workshops(CVPRW), pp. 2074–2083 (2018). https://doi.org/10.1109/CVPRW.2018.00281
Nirkin, Y., Masi, I., Tuan, A.T., Hassner, T., Medioni, G.: On face segmentation, face swapping, and face perception. In: 2018 13th IEEE International Conference on Automatic Face Gesture Recognition(FG), pp. 98–105 (2018). https://doi.org/10.1109/FG.2018.00024
Park, T., Liu, M.-Y., Wang, T.-C., Zhu, J.-Y.: Semantic image synthesis with spatially-adaptive normalization. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2337–2346 (2019). https://doi.org/10.1109/CVPR.2019.00244
Zeng, X., Pan, Y., Wang, M., Zhang, J., Liu, Y.: Realistic face reenactment via self-supervised disentangling of identity and pose. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pp. 12757–12764 (2020). https://doi.org/10.1609/aaai.v34i07.6970
Wiles, O., Koepke, A.S., Zisserman, A.: X2face: A network for controlling face generation using images, audio, and pose codes. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 670–686 (2018). https://doi.org/10.1007/978-3-030-01261-8_41
Subject Terms: Accuracy; Artificial Intelligence; Computer Graphics; Computer Science; Image Processing and Computer Vision; Methods; Original Article