Bidirectional temporal feature for 3D human pose and shape estimation from a video
Published in | Computer animation and virtual worlds Vol. 34; no. 3-4 |
Main Authors | Sun, Libo; Tang, Ting; Qu, Yuke; Qin, Wenhu |
Format | Journal Article |
Language | English |
Published | Hoboken, USA: John Wiley & Sons, Inc (Wiley Subscription Services, Inc), 01.05.2023 |
Abstract | 3D human pose and shape estimation is the foundation of analyzing human motion. However, estimating accurate and temporally consistent 3D human motion from a video remains a challenge. To date, most video‐based methods for estimating 3D human pose and shape rely on unidirectional temporal features and therefore lack more comprehensive information. To solve this problem, we propose a novel model, "bidirectional temporal feature for human motion recovery" (BTMR), which consists of a human motion generator and a discriminator. The transformer‐based generator effectively captures the forward and reverse temporal features to enhance the temporal correlation between frames and reduce the loss of spatial information. The motion discriminator, based on Bi‐LSTM, distinguishes whether the generated pose sequences are consistent with the realistic sequences of the AMASS dataset. Through this continuous generation and discrimination, the model learns more realistic and accurate poses. We evaluate BTMR on the 3DPW and MPI‐INF‐3DHP datasets. Without using the 3DPW training set, BTMR outperforms VIBE by 2.4 mm in PA‐MPJPE and by 14.9 mm/s² in the Accel metric, and outperforms TCMR by 1.7 mm in PA‐MPJPE on 3DPW. The results demonstrate that BTMR produces more accurate and temporally consistent 3D human motion.
Our bidirectional temporal feature for human motion recovery improves both temporal consistency and accuracy when estimating human motion from a video. It also helps to solve the problem of abnormal pose estimation for complex human motion. |
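The bidirectional idea described in the abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: a simple causal exponential smoothing stands in for the paper's transformer generator and Bi-LSTM, and the function names are hypothetical. The point it shows is only the core mechanism, that fusing a forward pass and a reverse pass over per-frame features gives every frame context from both past and future frames.

```python
# Illustrative sketch (not BTMR's actual code): bidirectional temporal
# feature fusion over a sequence of per-frame feature vectors.

def smooth(frames, alpha=0.5):
    """Causal recurrence: each output mixes the current frame's features
    with a running summary of the frames seen so far."""
    out, state = [], frames[0]
    for f in frames:
        state = [alpha * x + (1 - alpha) * s for x, s in zip(f, state)]
        out.append(state)
    return out

def bidirectional_features(frames, alpha=0.5):
    """Run the recurrence forward and backward over the sequence, then
    concatenate, so every frame sees both past and future context."""
    fwd = smooth(frames, alpha)
    bwd = smooth(frames[::-1], alpha)[::-1]
    return [f + b for f, b in zip(fwd, bwd)]

# Three frames, two features each; output keeps 3 frames but doubles
# the feature size (forward half + backward half).
seq = [[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]]
feats = bidirectional_features(seq)
print(len(feats), len(feats[0]))  # → 3 4
```

In the paper's setting the per-frame vectors would be CNN image features and the recurrences would be learned (transformer attention in the generator, Bi-LSTM in the discriminator), but the fusion pattern is the same.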
Author | Sun, Libo; Tang, Ting; Qu, Yuke; Qin, Wenhu |
Author_xml | Sun, Libo (ORCID 0000-0002-7838-9410; sunlibo@seu.edu.cn; Southeast University); Tang, Ting (ORCID 0009-0009-4845-4953; Southeast University); Qu, Yuke (ORCID 0000-0003-0263-8262; Southeast University); Qin, Wenhu (ORCID 0000-0002-9265-7397; qinwenhu@seu.edu.cn; Southeast University) |
Copyright | 2023 John Wiley & Sons, Ltd. |
DOI | 10.1002/cav.2187 |
Discipline | Visual Arts |
EISSN | 1546-427X |
Genre | article |
GrantInformation | National Key Research and Development Program of China (2020YFB160070301); Jiangsu Provincial Key Research and Development Program (BE2019311) |
ISSN | 1546-4261 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 3-4 |
ORCID | 0000-0002-7838-9410 0000-0003-0263-8262 0009-0009-4845-4953 0000-0002-9265-7397 |
PageCount | 13 |
PublicationDate | May/August 2023 |
PublicationPlace | Hoboken, USA |
PublicationTitle | Computer animation and virtual worlds |
PublicationYear | 2023 |
Publisher | John Wiley & Sons, Inc Wiley Subscription Services, Inc |
SubjectTerms | Bi‐LSTM; Datasets; Discriminators; Estimation; Human motion; human pose and shape estimation; Spatial data; Three dimensional motion; transformer |
Title | Bidirectional temporal feature for 3D human pose and shape estimation from a video |
URI | https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fcav.2187 https://www.proquest.com/docview/2822563167 |
Volume | 34 |