Key‐point‐guided adaptive convolution and instance normalization for continuous transitive face reenactment of any person

Bibliographic Details
Published in Computer animation and virtual worlds, Vol. 35, No. 3
Main Authors Xu, Shibiao, Hua, Miao, Zhang, Jiguang, Zhang, Zhaohui, Zhang, Xiaopeng
Format Journal Article
Language English
Published Chichester: Wiley Subscription Services, Inc., 01.05.2024
ISSN 1546-4261
EISSN 1546-427X
DOI 10.1002/cav.2256

Abstract Face reenactment technology is widely used in a variety of applications, but the reconstructions produced by existing methods are often not realistic enough. This paper therefore proposes a progressive face reenactment method. First, to make full use of the key‐point information, we propose adaptive convolution and instance normalization, which encode the key points into all learnable parameters of the network, including the weights of the convolution kernels and the means and variances in the normalization layers. Second, we present continuous transitive facial expression generation: because all the network weights are generated from the key points, the image produced by the network changes continuously with them. Third, in contrast to classical convolution, we apply a combination of depth‐ and point‐wise convolutions, which greatly reduces the number of weights and improves training efficiency. Finally, we extend the proposed face reenactment method to face editing. Comprehensive experiments demonstrate the effectiveness of the proposed method, which generates clearer and more realistic faces for any person and is more generic and applicable than other methods. This work presents a continuous transitive face reenactment algorithm that uses facial key‐point information to reenact faces gradually with a two‐stage GAN comprising a key‐point transformation module and a facial expression generation module: key points are transformed from the source face, and the corresponding facial expressions are generated on the target face.
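The abstract names three technical ingredients: key‐point‐conditioned adaptive convolution and instance normalization, and depth‐ plus point‐wise (separable) convolutions in place of dense kernels. The paper's own implementation is not reproduced in this record; below is a minimal PyTorch sketch of how such a conditioned block could look. The class name `KeyPointAdaptiveBlock`, the 68‐landmark assumption, and the hypernetwork sizes are illustrative choices, not the authors' architecture.

```python
# Hypothetical sketch (not the authors' code): a small hypernetwork maps face
# key points to the affine parameters of an instance-normalization layer, so
# landmark information conditions the block, while a depthwise + pointwise
# pair replaces a dense 3x3 convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KeyPointAdaptiveBlock(nn.Module):
    """Adaptive IN + depthwise/pointwise convolution conditioned on key points."""

    def __init__(self, channels: int, num_keypoints: int = 68):
        super().__init__()
        kp_dim = num_keypoints * 2                       # (x, y) per landmark
        # Depthwise 3x3 followed by pointwise 1x1: far fewer weights than a
        # dense 3x3 convolution mixing every input/output channel pair.
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)
        # Hypernetwork: key points -> per-channel scale and bias for IN.
        self.to_affine = nn.Sequential(
            nn.Linear(kp_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * channels),
        )

    def forward(self, feat: torch.Tensor, keypoints: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W); keypoints: (B, K, 2), coordinates normalized to [-1, 1].
        x = self.pointwise(self.depthwise(feat))
        # Instance normalization without fixed learned affine parameters ...
        x = F.instance_norm(x)
        # ... then modulate with key-point-predicted scale and bias (AdaIN-style).
        scale, bias = self.to_affine(keypoints.flatten(1)).chunk(2, dim=1)
        return x * (1 + scale[:, :, None, None]) + bias[:, :, None, None]
```

On the parameter count: a dense 3x3 convolution with C input and C output channels uses 9*C*C weights, while the depthwise‐plus‐pointwise pair uses 9*C + C*C (ignoring biases), which is the reduction the abstract refers to.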
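The closing sentences of the abstract describe a two‐stage pipeline (key‐point transformation, then expression generation) and a continuous transition of the generated image. One simple way to obtain such a gradual transition, sketched below under the assumption of a key‐point‐conditioned generator with the interface `generator(target_img, keypoints)`, is to blend the target face's own key points with the driven key points and render each intermediate set. `continuous_reenactment` is a hypothetical helper for illustration, not a function from the paper.

```python
import torch


def continuous_reenactment(generator, target_img, kp_target, kp_driven, steps=10):
    """Render a smooth frame sequence from the target's own expression to the
    expression driven by the source face (assumed generator interface)."""
    frames = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Blend the two key-point sets; each blend conditions one frame.
        kp_blend = (1.0 - alpha) * kp_target + alpha * kp_driven   # (B, K, 2)
        with torch.no_grad():
            frames.append(generator(target_img, kp_blend))
    return torch.stack(frames, dim=1)  # (B, steps, C, H, W)
```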
Author Zhang, Zhaohui
Zhang, Xiaopeng
Xu, Shibiao
Hua, Miao
Zhang, Jiguang
Author_xml – sequence: 1
  givenname: Shibiao
  surname: Xu
  fullname: Xu, Shibiao
  organization: Beijing University of Posts and Telecommunications
– sequence: 2
  givenname: Miao
  surname: Hua
  fullname: Hua, Miao
  organization: Beijing Bytedance Technology Co., Ltd
– sequence: 3
  givenname: Jiguang
  surname: Zhang
  fullname: Zhang, Jiguang
  email: jiguang.zhang@ia.ac.cn
  organization: Chinese Academy of Sciences
– sequence: 4
  givenname: Zhaohui
  orcidid: 0000-0003-0827-4797
  surname: Zhang
  fullname: Zhang, Zhaohui
  organization: Chinese Academy of Sciences
– sequence: 5
  givenname: Xiaopeng
  surname: Zhang
  fullname: Zhang, Xiaopeng
  organization: Chinese Academy of Sciences
ContentType Journal Article
Copyright 2024 John Wiley & Sons Ltd.
2024 John Wiley & Sons, Ltd.
Copyright_xml – notice: 2024 John Wiley & Sons Ltd.
– notice: 2024 John Wiley & Sons, Ltd.
DOI 10.1002/cav.2256
DatabaseName CrossRef
Computer and Information Systems Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Discipline Visual Arts
EISSN 1546-427X
EndPage n/a
ExternalDocumentID 10_1002_cav_2256
CAV2256
Genre article
GrantInformation_xml – fundername: National Natural Science Foundation of China
  funderid: 32271983; 52175493; 62162044; 62171321; 62271074
– fundername: Beijing Natural Science Foundation
  funderid: JQ23014
– fundername: State Key Laboratory of Virtual Reality Technology and Systems
  funderid: VRLAB2023B01
– fundername: Wenzhou Business School 2024 Talent launch program
  funderid: RC202401
ISSN 1546-4261
IsPeerReviewed true
IsScholarly true
Issue 3
Language English
ORCID 0000-0003-0827-4797
PageCount 15
PublicationDate May/June 2024
PublicationPlace Chichester
PublicationPlace_xml – name: Chichester
PublicationTitle Computer animation and virtual worlds
PublicationYear 2024
Publisher Wiley Subscription Services, Inc
Publisher_xml – name: Wiley Subscription Services, Inc
SubjectTerms Convolution
face reenactment
human‐centered computing
Image contrast
Image reconstruction
visualization
visualization application domains
URI https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fcav.2256
https://www.proquest.com/docview/3071608431