Multi‐temporal scale aggregation refinement graph convolutional network for skeleton‐based action recognition
Published in | Computer animation and virtual worlds Vol. 35; no. 1
Main Authors | Li, Xuanfeng; Lu, Jian; Zhou, Jian; Liu, Wei; Zhang, Kaibing
Format | Journal Article
Language | English
Published | Chichester: Wiley Subscription Services, Inc, January/February 2024
Subjects | action recognition; artificial neural networks; graph convolution; human activity recognition; skeleton data; system effectiveness; temporal information; virtual reality
Abstract | Skeleton‐based human action recognition is gaining significant attention and finding widespread application in various fields, such as virtual reality and human‐computer interaction systems. Recent studies have highlighted the effectiveness of graph convolutional network (GCN) based methods for this task, leading to remarkable improvements in prediction accuracy. However, most GCN‐based methods overlook the varying contributions of the self, centripetal, and centrifugal subsets. Moreover, only a single temporal scale of features is adopted, and multi‐temporal scale information is ignored. To this end, firstly, to differentiate the importance of different skeleton subsets, we develop a refinement graph convolution that adaptively learns a weight for each subset feature. Secondly, a multi‐temporal scale aggregation module is proposed to extract more discriminative temporal dynamic information. Building on these, a multi‐temporal scale aggregation refinement graph convolutional network (MTSA‐RGCN) is proposed, and a four‐stream structure is adopted to comprehensively model complementary features, which eventually yields a significant performance boost. In empirical experiments, our approach substantially outperforms other state‐of‐the‐art methods on both the NTU‐RGB+D 60 and NTU‐RGB+D 120 datasets.
The overall pipeline of our proposed method: the skeleton data is first fed into the RGCN to obtain basic feature representations, from which the RGCN learns richer spatial motion information of actions. Features at different temporal resolutions are then modulated in the temporal and spatial dimensions and aggregated into features with rich, discriminative temporal information for final classification.
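The refinement graph convolution described in the abstract — a learned importance weight for each of the self, centripetal, and centrifugal subsets — can be sketched in a few lines. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the names (`refinement_graph_conv`, `alpha`) are hypothetical, a real model would operate on batched spatio-temporal tensors with learned parameters, and the softmax normalization of the subset weights is one plausible choice.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

def refinement_graph_conv(x, subsets, weights, alpha):
    """Weighted sum of per-subset graph convolutions.

    x       : (N, C) node features (N joints, C channels)
    subsets : three (N, N) normalized adjacency matrices
              (self, centripetal, centrifugal)
    weights : three (C, C_out) projection matrices
    alpha   : (3,) learnable logits; softmax(alpha) gives each
              subset's adaptive importance (the "refinement")
    """
    a = softmax(alpha)
    return sum(a[k] * subsets[k] @ x @ weights[k] for k in range(3))
```

With `alpha = 0` all three subsets contribute equally; training would move `alpha` so that, e.g., centripetal edges dominate for actions where they are more informative.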
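The multi-temporal scale aggregation idea — extracting features at several temporal resolutions and fusing them — can be illustrated with a toy sketch. Here simple moving-average branches of different window sizes stand in for the paper's temporal convolution branches; the function names and kernel sizes are assumptions for illustration, not the published design.

```python
import numpy as np

def temporal_branch(x, k):
    """One temporal scale: moving average over time with window k
    and 'same'-length edge padding. x: (T, C) feature sequence."""
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([xp[t:t + k].mean(axis=0) for t in range(x.shape[0])])

def multi_scale_aggregate(x, kernels=(1, 3, 5)):
    """Fuse features extracted at several temporal scales by
    averaging the branch outputs (a real model would learn the fusion)."""
    return np.mean([temporal_branch(x, k) for k in kernels], axis=0)
```

Small windows preserve fast motion cues while large windows smooth over longer dynamics; aggregating the branches gives the classifier access to both.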
Authors | Li, Xuanfeng (Xi'an Polytechnic University); Lu, Jian (Xi'an Polytechnic University; lujian_studio@163.com); Zhou, Jian (Xi'an Polytechnic University); Liu, Wei (Xi'an Polytechnic University); Zhang, Kaibing (Xi'an Polytechnic University)
Cited by | 10.1007/s00371-024-03601-1
Copyright | 2023 John Wiley & Sons Ltd. 2024 John Wiley & Sons, Ltd. |
DOI | 10.1002/cav.2221 |
Discipline | Visual Arts |
EISSN | 1546-427X |
Genre | article |
Funding | National Natural Science Foundation of China (61971339; 62101419); Applied Technology Research and Development Project in Beilin District, Xi'an City (GX2007); Natural Science Project of Shaanxi Provincial Department of Science and Technology (2022JM‐146)
ISSN | 1546-4261 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 1 |
PageCount | 14 |
PublicationDate | January/February 2024
PublicationPlace | Chichester |
PublicationTitle | Computer animation and virtual worlds |
PublicationYear | 2024 |
Publisher | Wiley Subscription Services, Inc |
SubjectTerms | action recognition; artificial neural networks; graph convolution; human activity recognition; skeleton data; system effectiveness; temporal information; virtual reality
URI | https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fcav.2221 https://www.proquest.com/docview/2930456357 |
Volume | 35 |