LGNet: Local‐And‐Global Feature Adaptive Network for Single Image Two‐Hand Reconstruction
Published in | Computer animation and virtual worlds Vol. 36; no. 4 |
---|---|
Main Authors | Xue, Haowei; Wang, Meili |
Format | Journal Article |
Language | English |
Published | Hoboken, USA: John Wiley & Sons, Inc. (Wiley Subscription Services, Inc.), 01.07.2025 |
Abstract | ABSTRACT
Accurate 3D interacting hand mesh reconstruction from RGB images is crucial for applications such as robotics, augmented reality (AR), and virtual reality (VR). Especially in robotics, accurate interacting hand mesh reconstruction can significantly improve the accuracy and naturalness of human-robot interaction. This task requires an accurate understanding of the complex interactions between two hands and a reasonable alignment of the hand mesh with the image. Recent Transformer-based methods directly use the features of the two hands as input tokens, ignoring the correlation between local and global features of the interacting hands, which leads to hand ambiguity, self-occlusion, and self-similarity problems. We propose LGNet, a Local and Global Feature Adaptive Network, which separates the hand mesh reconstruction process into three stages: a joint stage for predicting hand joints; a mesh stage for predicting a rough hand mesh; and a refine stage for fine-tuning the mesh-image alignment using an offset mesh. LGNet enables high-quality fingertip-level mesh-image alignment, effectively models the spatial relationship between two hands, and supports real-time prediction. Comprehensive quantitative and qualitative evaluations on benchmark datasets show that LGNet surpasses existing methods in mesh accuracy and alignment accuracy, while also demonstrating robust generalization on in-the-wild images. Our source code will be made available to the community. |
---|---|
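The three-stage pipeline named in the abstract (joint stage, mesh stage, refine stage with an offset mesh) can be sketched in code. This is a minimal illustrative sketch only, not the authors' implementation: the class name `LGNetSketch`, the feature dimension, and the plain linear decoders are all assumptions; only the joint/mesh/refine staging follows the abstract, and the 21-joint / 778-vertex counts follow the common MANO hand convention.

```python
import numpy as np

# Illustrative sketch of the three-stage pipeline described in the abstract:
# joint stage -> mesh stage -> refine stage (offset mesh). All layer choices,
# dimensions, and names here are hypothetical stand-ins, not the paper's model.

N_JOINTS = 21    # keypoints per hand (MANO convention)
N_VERTS = 778    # mesh vertices per hand (MANO convention)
FEAT_DIM = 64    # assumed size of the extracted image feature

rng = np.random.default_rng(0)


class LGNetSketch:
    def __init__(self):
        # Stage decoders as plain linear maps (stand-ins for the real networks).
        self.w_joint = rng.standard_normal((FEAT_DIM, 2 * N_JOINTS * 3)) * 0.01
        self.w_mesh = rng.standard_normal((2 * N_JOINTS * 3, 2 * N_VERTS * 3)) * 0.01
        self.w_offset = rng.standard_normal((FEAT_DIM, 2 * N_VERTS * 3)) * 0.01

    def __call__(self, feat):
        # 1) Joint stage: predict 3D joints for both hands from image features.
        joints = (feat @ self.w_joint).reshape(2, N_JOINTS, 3)
        # 2) Mesh stage: lift the predicted joints to a rough two-hand mesh.
        rough = (joints.reshape(1, -1) @ self.w_mesh).reshape(2, N_VERTS, 3)
        # 3) Refine stage: predict a per-vertex offset mesh and add it to the
        #    rough mesh, fine-tuning mesh-image alignment.
        offset = (feat @ self.w_offset).reshape(2, N_VERTS, 3)
        return joints, rough + offset


model = LGNetSketch()
joints, mesh = model(rng.standard_normal((1, FEAT_DIM)))
print(joints.shape, mesh.shape)  # (2, 21, 3) (2, 778, 3)
```

The design point the abstract emphasizes is that the refine stage corrects a coarse prediction rather than regressing the final mesh in one shot, which is what the additive `rough + offset` step mimics here.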
Author | Xue, Haowei; Wang, Meili |
Author details | Xue, Haowei (ORCID 0009-0006-0377-8143), Northwest A&F University; Wang, Meili (ORCID 0000-0001-7901-1789, wml@nwsuaf.edu.cn), Shaanxi Key Laboratory of Agricultural Information Perception and Intelligent Service |
Copyright | 2025 John Wiley & Sons, Ltd. |
DOI | 10.1002/cav.70021 |
Discipline | Visual Arts |
EISSN | 1546-427X |
Funding | Museum Cultural Relics Visualization System Fund (K4050722011) |
ISSN | 1546-4261 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 4 |
ORCID | 0000-0001-7901-1789 0009-0006-0377-8143 |
PageCount | 11 |
PublicationDate | July/August 2025 |
Subjects | 3D mesh reconstruction; Accuracy; Alignment; Augmented reality; Color imagery; Hands; Human-computer interaction; Image quality; Image reconstruction; Occlusion; Robotics; Transformer-based; Virtual reality |
URI | https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fcav.70021 https://www.proquest.com/docview/3243771205 |
Volume | 36 |