MarkerNet: A divide‐and‐conquer solution to motion capture solving from raw markers
Published in | Computer animation and virtual worlds, Vol. 35, No. 1 |
---|---|
Main Authors | Hu, Zhipeng; Tang, Jilin; Li, Lincheng; Hou, Jie; Xin, Haoran; Yu, Xin; Bu, Jiajun |
Format | Journal Article |
Language | English |
Published | Chichester: Wiley Subscription Services, Inc., 01.01.2024 |
Subjects | Algorithms; Animation; Cleaning; deep learning; Games; MoCap solving; Motion capture; Synthetic data; virtual character animation |
Online Access | Get full text |
Abstract | Marker‐based optical motion capture (MoCap) aims to localize 3D human motions from a sequence of raw input markers. It is widely used to produce physical movements for virtual characters in various game genres, such as role‐playing, fighting, and action‐adventure games. However, the conventional MoCap cleaning and solving process is extremely labor‐intensive, time‐consuming, and usually the most costly part of game animation production. Thus, there is high demand in the game industry for automated algorithms that can replace costly manual operations while achieving accurate MoCap cleaning and solving. In this article, we design a divide‐and‐conquer‐based MoCap solving network, dubbed MarkerNet, to effectively estimate human skeleton motions from sequential raw markers. In a nutshell, our key idea is to decompose the direct solving of global motion from all markers into first modeling sub‐motions of local parts from the corresponding marker subsets and then aggregating these sub‐motions into a global one. In this manner, our model can effectively capture local motion patterns with respect to different marker subsets, producing more accurate results than existing methods. Extensive experiments on both real and synthetic data verify the effectiveness of the proposed method.
The overall motion of a human body can be decomposed into several sub‐motions of different local parts. Thus, we divide all sequential markers into different subsets and learn the local sub‐motions from the corresponding marker subsets within local spatio‐temporal ranges. |
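The abstract describes the method only at a high level, so the sketch below is a minimal, speculative PyTorch rendering of that divide‐and‐conquer idea, not the paper's actual architecture: the marker grouping `PART_MARKERS`, the 16‐marker layout, the convolutional local encoders, the feature sizes, and the 6D‐rotation‐plus‐root‐translation output are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical marker-to-body-part grouping over a 16-marker layout; the
# paper's actual partition and marker count are not given in this record.
PART_MARKERS = {
    "torso": [0, 1, 2, 3],
    "left_arm": [4, 5, 6],
    "right_arm": [7, 8, 9],
    "left_leg": [10, 11, 12],
    "right_leg": [13, 14, 15],
}

class DivideAndConquerSolver(nn.Module):
    """Encode each marker subset into a local sub-motion feature with a
    temporal encoder, then aggregate the features into a global pose."""

    def __init__(self, num_joints: int = 24, feat_dim: int = 128):
        super().__init__()
        # One local spatio-temporal encoder per marker subset (1D convs over time).
        self.local_encoders = nn.ModuleDict({
            part: nn.Sequential(
                nn.Conv1d(len(idx) * 3, feat_dim, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=1),
                nn.ReLU(),
            )
            for part, idx in PART_MARKERS.items()
        })
        # Aggregator fuses per-part features into per-joint 6D rotations plus
        # a root translation (an assumed output parameterization).
        self.aggregate = nn.Linear(feat_dim * len(PART_MARKERS), num_joints * 6 + 3)

    def forward(self, markers: torch.Tensor) -> torch.Tensor:
        # markers: (batch, frames, num_markers, 3) raw marker positions.
        b, t, _, _ = markers.shape
        feats = []
        for part, idx in PART_MARKERS.items():
            # Slice this part's markers and flatten xyz per frame: (b, C, t).
            sub = markers[:, :, idx, :].reshape(b, t, -1).transpose(1, 2)
            # Local sub-motion feature, pooled over the temporal window.
            feats.append(self.local_encoders[part](sub).mean(dim=2))
        # Divide-and-conquer aggregation: concatenate local features, solve globally.
        return self.aggregate(torch.cat(feats, dim=1))
```

Under these assumptions, `DivideAndConquerSolver()(torch.randn(2, 16, 16, 3))` returns a `(2, 147)` tensor (24 joints × 6 + 3) for a 16‐frame window of 16 markers; the point of the structure is that each encoder sees only its own marker subset, mirroring the local sub‐motion modeling the abstract describes.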
Author | Hu, Zhipeng; Tang, Jilin; Li, Lincheng; Hou, Jie; Xin, Haoran; Yu, Xin; Bu, Jiajun |
Author_xml | 1. Hu, Zhipeng (Zhejiang University); 2. Tang, Jilin (NetEase Fuxi AI Lab; ORCID 0000-0001-9478-7489); 3. Li, Lincheng (NetEase Fuxi AI Lab; lilincheng@corp.netease.com); 4. Hou, Jie (NetEase Fuxi AI Lab); 5. Xin, Haoran (NetEase Fuxi AI Lab); 6. Yu, Xin (University of Queensland); 7. Bu, Jiajun (Zhejiang University) |
ContentType | Journal Article |
Copyright | 2024 John Wiley & Sons, Ltd. |
DOI | 10.1002/cav.2228 |
DatabaseName | CrossRef; Computer and Information Systems Abstracts; Technology Research Database; ProQuest Computer Science Collection; Advanced Technologies Database with Aerospace; Computer and Information Systems Abstracts – Academic; Computer and Information Systems Abstracts Professional |
Discipline | Visual Arts |
EISSN | 1546-427X |
Genre | article |
ISSN | 1546-4261 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 1 |
Language | English |
ORCID | 0000-0001-9478-7489 |
PageCount | 19 |
PublicationDate | January/February 2024 |
PublicationPlace | Chichester |
PublicationTitle | Computer animation and virtual worlds |
PublicationYear | 2024 |
Publisher | Wiley Subscription Services, Inc |
SubjectTerms | Algorithms; Animation; Cleaning; deep learning; Games; MoCap solving; Motion capture; Synthetic data; virtual character animation |
Title | MarkerNet: A divide‐and‐conquer solution to motion capture solving from raw markers |
URI | https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fcav.2228 https://www.proquest.com/docview/2930456581 |
Volume | 35 |