Skeleton-based Human Action Recognition via Large-kernel Attention Graph Convolutional Network

Bibliographic Details
Published in IEEE Transactions on Visualization and Computer Graphics, Vol. 29, no. 5, pp. 2575-2585
Main Authors Liu, Yanan; Zhang, Hao; Li, Yanqiu; He, Kangjian; Xu, Dan
Format Journal Article
Language English
Published United States: IEEE, 01.05.2023
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects
Online Access Get full text

Abstract Skeleton-based human action recognition has broad application prospects in virtual reality, as skeleton data is more resistant to noise such as background interference and camera-angle changes. Notably, recent works treat the human skeleton as a non-grid representation, e.g., a skeleton graph, and learn its spatio-temporal pattern via graph convolution operators. Still, stacked graph convolutions play only a marginal role in modeling the long-range dependencies that may contain crucial action-semantic cues. In this work, we introduce a skeleton large-kernel attention operator (SLKA), which enlarges the receptive field and improves channel adaptability without adding much computational burden. We then integrate a spatio-temporal SLKA module (ST-SLKA), which aggregates long-range spatial features and learns long-distance temporal correlations. Building on these, we design a novel skeleton-based action-recognition architecture, the spatio-temporal large-kernel attention graph convolution network (LKA-GCN). In addition, because large-movement frames may carry significant action information, we propose a joint movement modeling (JMM) strategy to focus on valuable temporal interactions. On the NTU-RGBD 60, NTU-RGBD 120, and Kinetics-Skeleton 400 action datasets, LKA-GCN achieves state-of-the-art performance.
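To make the abstract's two key ideas concrete, here is a minimal PyTorch sketch of a large-kernel attention operator over a skeleton feature map of shape (N, C, T, V), plus a frame-displacement feature in the spirit of joint movement modeling. This is an illustrative reading of the abstract only, assuming the common decomposition of one large kernel into a depth-wise convolution, a dilated depth-wise convolution, and a point-wise convolution; the class name, kernel sizes, and the joint_movement helper are hypothetical, not the authors' exact SLKA or JMM implementation.

import torch
import torch.nn as nn

class SkeletonLargeKernelAttention(nn.Module):
    # Hypothetical SLKA-style operator: three cheap convolutions
    # approximate one large kernel, and their output gates the input.
    def __init__(self, channels):
        super().__init__()
        # local context: 5x5 depth-wise convolution
        self.dw = nn.Conv2d(channels, channels, kernel_size=5,
                            padding=2, groups=channels)
        # long range: a 7-tap depth-wise convolution with dilation 3
        # spans 19 frames/joints at the cost of 7
        self.dw_dilated = nn.Conv2d(channels, channels, kernel_size=7,
                                    padding=9, dilation=3, groups=channels)
        # channel mixing: the 1x1 convolution supplies the channel
        # adaptability the abstract mentions
        self.pw = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        # x: (N, C, T, V) = batch, channels, frames, joints
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return x * attn  # element-wise attention gating

def joint_movement(x):
    # Hypothetical JMM-style feature: frame-to-frame displacement,
    # so large-movement frames produce large activations downstream.
    # x: (N, 3, T, V) raw joint coordinates
    motion = x[:, :, 1:] - x[:, :, :-1]
    return torch.cat([motion, motion[:, :, -1:]], dim=2)  # pad back to length T

In a full network one would interleave such attention blocks with graph and temporal convolutions and feed the displacement stream alongside raw coordinates; the paper's actual ST-SLKA wiring and JMM strategy may differ in detail.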
Author He, Kangjian
Zhang, Hao
Li, Yanqiu
Liu, Yanan
Xu, Dan
Author_xml – sequence: 1
  givenname: Yanan
  surname: Liu
  fullname: Liu, Yanan
  email: 731696785@qq.com
  organization: School of Information Science and Engineering, Yunnan University, Kunming, China
– sequence: 2
  givenname: Hao
  surname: Zhang
  fullname: Zhang, Hao
  organization: School of Information Science and Engineering, Yunnan University, Kunming, China
– sequence: 3
  givenname: Yanqiu
  surname: Li
  fullname: Li, Yanqiu
  organization: School of Information Science and Engineering, Yunnan University, Kunming, China
– sequence: 4
  givenname: Kangjian
  surname: He
  fullname: He, Kangjian
  organization: School of Information Science and Engineering, Yunnan University, Kunming, China
– sequence: 5
  givenname: Dan
  orcidid: 0000-0003-4602-3550
  surname: Xu
  fullname: Xu, Dan
  email: danxu@ynu.edu.cn
  organization: School of Information Science and Engineering, Yunnan University, Kunming, China
BackLink https://www.ncbi.nlm.nih.gov/pubmed/37027698 (View this record in MEDLINE/PubMed)
CODEN ITVGEA
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
DOI 10.1109/TVCG.2023.3247075
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE/IET Electronic Library
CrossRef
PubMed
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
DatabaseTitle CrossRef
PubMed
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
EISSN 1941-0506
EndPage 2585
ExternalDocumentID 37027698
10_1109_TVCG_2023_3247075
10049725
Genre orig-research
Journal Article
GrantInformation_xml – fundername: Science Research Fund Project of Yunnan Provincial Department of Education
  grantid: 2021Y027
– fundername: Yunnan Province Ten Thousand Talents Program and Yunling Scholars Special
  grantid: YNWR-YLXZ-2018-022
– fundername: Yunnan Provincial Science and Technology Department-Yunnan University
  grantid: 2019fy003012
– fundername: Provincial Major Science and Technology Special Plan
  grantid: 202202AD080003
– fundername: National Natural Science Foundation of China
  grantid: 62162068; 61761049; 62202416
  funderid: 10.13039/501100001809
– fundername: Graduate Research and Innovation Foundation of Yunnan University
  grantid: KC-22221726
ISSN 1077-2626
IsPeerReviewed true
IsScholarly true
Issue 5
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
ORCID 0000-0003-4602-3550
PMID 37027698
PQID 2792117943
PQPubID 75741
PageCount 11
PublicationCentury 2000
PublicationDate 2023-05-01
PublicationDateYYYYMMDD 2023-05-01
PublicationDate_xml – month: 05
  year: 2023
  text: 2023-05-01
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
– name: New York
PublicationTitle IEEE Transactions on Visualization and Computer Graphics
PublicationTitleAbbrev TVCG
PublicationTitleAlternate IEEE Trans Vis Comput Graph
PublicationYear 2023
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
SourceID proquest
pubmed
crossref
ieee
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 2575
SubjectTerms action recognition
Adaptation models
Artificial neural networks
Background noise
Computer architecture
Convolution
graph convolution
Graphical representations
Human activity recognition
human skeleton
Joints
Kernel
Kernels
large kernels
Modelling
Skeleton
Task analysis
Topology
Virtual reality
Title Skeleton-based Human Action Recognition via Large-kernel Attention Graph Convolutional Network
URI https://ieeexplore.ieee.org/document/10049725
https://www.ncbi.nlm.nih.gov/pubmed/37027698
https://www.proquest.com/docview/2792117943
https://www.proquest.com/docview/2798710713
Volume 29