Talking Face Generation With Lip and Identity Priors
Published in | Computer animation and virtual worlds Vol. 36; no. 3
Main Authors | Wu, Jiajie; Li, Frederick W. B.; Tam, Gary K. L.; Yang, Bailin; Nan, Fangzhe; Pan, Jiahao
Format | Journal Article |
Language | English |
Published | Hoboken, USA: John Wiley & Sons, Inc (Wiley Subscription Services, Inc), 01.05.2025
Subjects | Alignment; lip and identity priors; Optical flow (image analysis); Rendering; Speech; speech‐driven; Synchronism; Talking; talking face generation
Abstract | Speech‐driven talking face video generation has attracted growing interest in recent research. While person‐specific approaches yield high‐fidelity results, they require extensive training data from each individual speaker. In contrast, general‐purpose methods often struggle with accurate lip synchronization, identity preservation, and natural facial movements. To address these limitations, we propose a novel architecture that combines an alignment model with a rendering model. The rendering model synthesizes identity‐consistent lip movements by leveraging facial landmarks derived from speech, a partially occluded target face, multi‐reference lip features, and the input audio. Concurrently, the alignment model estimates optical flow using the occluded face and a static reference image, enabling precise alignment of facial poses and lip shapes. This collaborative design enhances the rendering process, resulting in more realistic and identity‐preserving outputs. Extensive experiments demonstrate that our method significantly improves lip synchronization and identity retention, establishing a new benchmark in talking face video generation.
We propose a speech‐driven talking face generation framework that integrates optical flow‐based alignment and audio‐aware rendering with multi‐reference lip features. Our method effectively improves lip detail and identity preservation.
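The abstract describes a two‐module design: an alignment model that estimates optical flow between a static reference face and the partially occluded target face, and a rendering model conditioned on speech‐derived landmarks, multi‐reference lip features, and the input audio. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of how such a pipeline could be wired together. All module names, layer sizes, and feature dimensions (e.g., `AlignmentModel`, `RenderingModel`, an 80‐dim audio feature, 68 landmarks) are illustrative assumptions.

```python
# Hedged sketch (not the paper's code): a two-module talking-face pipeline
# in the spirit of the abstract. Layer sizes and feature dims are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(image, flow):
    """Backward-warp `image` (B,C,H,W) with a dense flow field (B,2,H,W)."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Normalize pixel-space flow to the [-1, 1] grid used by grid_sample.
    norm_flow = torch.stack(
        (flow[:, 0] / max(w - 1, 1) * 2, flow[:, 1] / max(h - 1, 1) * 2), dim=-1
    )
    return F.grid_sample(image, base_grid + norm_flow, align_corners=True)


class AlignmentModel(nn.Module):
    """Estimates optical flow aligning a static reference to the target pose."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # 2-channel flow field
        )

    def forward(self, occluded_face, reference_face):
        return self.net(torch.cat([occluded_face, reference_face], dim=1))


class RenderingModel(nn.Module):
    """Synthesizes the talking face from visual, landmark, and audio cues."""

    def __init__(self, audio_dim=80, landmark_dim=68 * 2, lip_dim=256):
        super().__init__()
        self.visual = nn.Sequential(
            nn.Conv2d(9, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.cond = nn.Linear(audio_dim + landmark_dim + lip_dim, 64)
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, occluded_face, warped_ref, reference_face,
                audio_feat, landmarks, lip_feat):
        x = self.visual(torch.cat([occluded_face, warped_ref, reference_face], dim=1))
        cond = self.cond(torch.cat([audio_feat, landmarks, lip_feat], dim=1))
        x = x + cond[:, :, None, None]   # broadcast conditioning over space
        return self.decoder(x)


if __name__ == "__main__":
    B, H, W = 2, 96, 96
    occluded = torch.rand(B, 3, H, W)    # lower-half-masked target frame
    reference = torch.rand(B, 3, H, W)   # static identity reference
    audio = torch.rand(B, 80)            # e.g. a mel-spectrogram window
    landmarks = torch.rand(B, 68 * 2)    # speech-predicted facial landmarks
    lip_feat = torch.rand(B, 256)        # pooled multi-reference lip features

    flow = AlignmentModel()(occluded, reference)
    warped = warp(reference, flow)
    frame = RenderingModel()(occluded, warped, reference, audio, landmarks, lip_feat)
    print(frame.shape)                   # torch.Size([2, 3, 96, 96])
```

In the setting the abstract describes, the warped reference would supply pose‐aligned identity detail to the renderer while the audio and landmark conditioning drives the lip region; the toy tensors in `__main__` only verify that the shapes compose.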
Author | Yang, Bailin; Tam, Gary K. L.; Nan, Fangzhe; Wu, Jiajie; Li, Frederick W. B.; Pan, Jiahao
Author Details | 1. Wu, Jiajie (ORCID 0009-0006-5947-2813), Zhejiang Gongshang University; 2. Li, Frederick W. B. (ORCID 0000-0002-4283-4228), University of Durham; 3. Tam, Gary K. L., Swansea University; 4. Yang, Bailin (ybl@zjgsu.edu.cn), Zhejiang Gongshang University; 5. Nan, Fangzhe, Zhejiang Gongshang University; 6. Pan, Jiahao, Zhejiang Gongshang University
ContentType | Journal Article |
Copyright | 2025 John Wiley & Sons Ltd. 2025 John Wiley & Sons, Ltd. |
DOI | 10.1002/cav.70026 |
DatabaseName | CrossRef; Computer and Information Systems Abstracts; Technology Research Database; ProQuest Computer Science Collection; Advanced Technologies Database with Aerospace; Computer and Information Systems Abstracts – Academic; Computer and Information Systems Abstracts Professional
Discipline | Visual Arts |
EISSN | 1546-427X |
EndPage | n/a |
Genre | researchArticle |
GrantInformation | National Natural Science Foundation of China (62172366); Zhejiang Provincial Natural Science Foundation of China (LD24F020003); Major Sci‐Tech Innovation Project of Hangzhou City (2022AIZD0110)
ISSN | 1546-4261 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 3 |
Language | English |
Notes | Funding: This work was supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LD24F020003), the National Natural Science Foundation of China (Grant No. 62172366), and the Major Sci‐Tech Innovation Project of Hangzhou City (2022AIZD0110).
ORCID | 0009-0006-5947-2813 0000-0002-4283-4228 |
PQID | 3228987975 |
PQPubID | 2034909 |
PageCount | 11 |
PublicationDate | May/June 2025
PublicationDateYYYYMMDD | 2025-05-01 |
PublicationPlace | Hoboken, USA; Chichester
PublicationTitle | Computer animation and virtual worlds |
PublicationYear | 2025 |
Publisher | John Wiley & Sons, Inc Wiley Subscription Services, Inc |
SubjectTerms | Alignment; lip and identity priors; Optical flow (image analysis); Rendering; Speech; speech‐driven; Synchronism; Talking; talking face generation
Title | Talking Face Generation With Lip and Identity Priors |
URI | https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fcav.70026 https://www.proquest.com/docview/3228987975 |
Volume | 36 |