Frontal person image generation based on arbitrary‐view human images
Published in | Computer animation and virtual worlds Vol. 35; no. 4 |
Main Authors | Zhang, Yong; Zhang, Yuqing; Chen, Lufei; Yin, Baocai; Sun, Yongliang |
Format | Journal Article |
Language | English |
Published | Chichester: Wiley Subscription Services, Inc, 01.07.2024 |
Subjects | Annotations; arbitrary‐view images; deep learning; Feature recognition; frontal person image generation; frontal pose estimation; Generative adversarial networks; Image processing; Virtual networks |
Abstract | Frontal person images contain the richest detailed features of humans, which can effectively assist behavioral recognition, virtual dress fitting, and other applications. Although many remarkable networks have been devoted to the person image generation task, most of them require accurate target poses as network inputs, and annotating target poses is difficult and time‐consuming. In this work, we propose the first frontal person image generation network, built on a proposed anchor pose set and a generative adversarial network. Specifically, our method first assigns a rough frontal pose to the input human image by classifying it against the proposed anchor pose set, and then regresses all key points of the rough frontal pose to estimate an accurate frontal pose. Taking the estimated frontal pose as the target pose, we construct a two‐stream generator based on the generative adversarial network that updates the person's shape and appearance features in a crossing manner and generates a realistic frontal person image. Experiments on the challenging CMU Panoptic dataset show that our method can generate realistic frontal images from arbitrary‐view human images. |
Graphical abstract | The process of frontal person image generation based on arbitrary‐view human images. |
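As a reading aid only (not code from the paper), the sketch below illustrates the two‐stage pipeline the abstract describes: soft classification against an anchor pose set plus keypoint regression to obtain a frontal target pose, followed by a two‐stream generator that crosses shape and appearance features before decoding a frontal image. All module names, dimensions, and hyperparameters (`FrontalPoseEstimator`, `TwoStreamGenerator`, `NUM_ANCHORS`, `NUM_KEYPOINTS`) are hypothetical assumptions, not the authors' implementation.

```python
# Minimal, illustrative sketch of the described pipeline (assumed names/shapes).
import torch
import torch.nn as nn

NUM_ANCHORS = 8     # assumed size of the anchor pose set
NUM_KEYPOINTS = 18  # assumed number of 2D keypoints per pose


class FrontalPoseEstimator(nn.Module):
    """Classifies a rough frontal anchor pose, then regresses keypoint offsets."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(NUM_KEYPOINTS * 2, feat_dim), nn.ReLU())
        self.anchor_cls = nn.Linear(feat_dim, NUM_ANCHORS)        # which anchor pose
        self.offset_reg = nn.Linear(feat_dim, NUM_KEYPOINTS * 2)  # per-keypoint refinement
        # Anchor pose set: fixed rough frontal poses (random placeholders here).
        self.register_buffer("anchors", torch.randn(NUM_ANCHORS, NUM_KEYPOINTS * 2))

    def forward(self, src_keypoints):                 # (B, K*2) keypoints of the input view
        h = self.backbone(src_keypoints)
        probs = self.anchor_cls(h).softmax(dim=-1)    # soft anchor selection (a hard argmax is equally plausible)
        rough = probs @ self.anchors                  # rough frontal pose
        return rough + self.offset_reg(h)             # refined frontal target pose


class TwoStreamGenerator(nn.Module):
    """Shape and appearance streams whose features are fused ('crossed') before decoding."""

    def __init__(self, ch: int = 64):
        super().__init__()
        self.shape_enc = nn.Conv2d(1, ch, 3, padding=1)  # encodes target-pose heatmap
        self.app_enc = nn.Conv2d(3, ch, 3, padding=1)    # encodes the source RGB image
        self.cross = nn.Conv2d(2 * ch, ch, 1)            # crossing fusion of both streams
        self.decode = nn.Conv2d(ch, 3, 3, padding=1)     # decodes the frontal image

    def forward(self, pose_map, src_img):
        s, a = self.shape_enc(pose_map), self.app_enc(src_img)
        fused = self.cross(torch.cat([s, a], dim=1))
        return torch.tanh(self.decode(fused))


if __name__ == "__main__":
    est, gen = FrontalPoseEstimator(), TwoStreamGenerator()
    kps = torch.randn(2, NUM_KEYPOINTS * 2)              # keypoints of arbitrary-view inputs
    target_pose = est(kps)                               # estimated frontal pose, shape (2, 36)
    # The estimated pose would normally be rendered to heatmaps; a dummy map stands in here.
    frontal = gen(torch.randn(2, 1, 64, 64), torch.randn(2, 3, 64, 64))
    print(target_pose.shape, frontal.shape)              # (2, 36) and (2, 3, 64, 64)
```

In a full system, adversarial and reconstruction losses would drive the generator toward realistic frontal images; the sketch only shows the data flow implied by the abstract.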
Author | Yin, Baocai; Sun, Yongliang; Chen, Lufei; Zhang, Yuqing; Zhang, Yong |
Author details | Zhang, Yong (email: zhangyong2010@bjut.edu.cn; Beijing Institute of Artificial Intelligence, Department of Information Science, Beijing University of Technology); Zhang, Yuqing (ORCID 0000-0002-0015-8874; Beijing Institute of Artificial Intelligence, Department of Information Science, Beijing University of Technology); Chen, Lufei (Beijing Institute of Artificial Intelligence, Department of Information Science, Beijing University of Technology); Yin, Baocai (Beijing Institute of Artificial Intelligence, Department of Information Science, Beijing University of Technology); Sun, Yongliang (Taiji Co LTD) |
ContentType | Journal Article |
Copyright | 2024 John Wiley & Sons Ltd. 2024 John Wiley & Sons, Ltd. |
DOI | 10.1002/cav.2234 |
Discipline | Visual Arts |
EISSN | 1546-427X |
EndPage | n/a |
Genre | article |
GrantInformation | National Key Research and Development Program of China (2021ZD0111902); National Natural Science Foundation of China (U21B2038; 61772209; 62072015) |
ISSN | 1546-4261 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 4 |
Language | English |
ORCID | 0000-0002-0015-8874 |
PageCount | 16 |
PublicationDate | July/August 2024 |
PublicationPlace | Chichester |
PublicationTitle | Computer animation and virtual worlds |
PublicationYear | 2024 |
Publisher | Wiley Subscription Services, Inc |
SubjectTerms | Annotations; arbitrary‐view images; deep learning; Feature recognition; frontal person image generation; frontal pose estimation; Generative adversarial networks; Image processing; Virtual networks |
Title | Frontal person image generation based on arbitrary‐view human images |
URI | https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fcav.2234 https://www.proquest.com/docview/3095890052 |
Volume | 35 |