A 3D graph convolutional networks model for 2D skeleton‐based human action recognition

Bibliographic Details
Published in: IET Image Processing, Vol. 17, No. 3, pp. 773-783
Main Authors: Weng, Libo; Lou, Weidong; Shen, Xin; Gao, Fei
Format: Journal Article
Language: English
Published: Wiley, 01.02.2023

Abstract: With the growing popularity of cameras, action recognition is being applied ever more widely. With the emergence of RGB‐D cameras and human pose estimation algorithms, human actions can be represented as sequences of skeleton joints, and skeleton‐based action recognition has therefore become a research hotspot. In this paper, a novel 3D Graph Convolutional Network model (3D‐GCN) with a space‐time attention mechanism for 2D skeleton data is proposed. Three‐dimensional graph convolution is employed to extract spatiotemporal features from a skeleton descriptor composed of joint coordinates, frame differences, and angles; in addition, different joints and different frames are assigned different attention weights for action classification. A zebra‐crossing pedestrian dataset named ZCP is also provided, which simulates pedestrian actions that may occur on zebra crossings in real scenes. Experimental evaluation is carried out on the ZCP dataset and the NTU RGB+D dataset, and the results show that the proposed method outperforms current 2D‐based methods and is comparable with 3D‐based methods.
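The abstract describes a skeleton descriptor assembled from joint coordinates, frame differences, and angles. Below is a minimal sketch of how such a descriptor could be built from 2D pose-estimation output; the specific angle definition (each joint's direction relative to a reference joint), the joint count, and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_descriptor(joints):
    """Assemble a per-frame skeleton descriptor from 2D joint coordinates.

    joints: array of shape (T, V, 2) -- T frames, V joints, (x, y) coordinates.
    Returns an array of shape (T, V, 5): coordinates, frame differences,
    and a per-joint angle channel.
    """
    # Channels 1-2: raw joint coordinates.
    coords = joints.astype(np.float32)

    # Channels 3-4: frame differences (motion), zero for the first frame.
    diffs = np.zeros_like(coords)
    diffs[1:] = coords[1:] - coords[:-1]

    # Channel 5: angle of each joint relative to a reference joint
    # (the first joint is used here as a stand-in for the skeleton centre).
    ref = coords[:, :1, :]                                     # (T, 1, 2)
    vec = coords - ref                                         # (T, V, 2)
    angles = np.arctan2(vec[..., 1], vec[..., 0])[..., None]   # (T, V, 1)

    return np.concatenate([coords, diffs, angles], axis=-1)    # (T, V, 5)

# Example: 30 frames of an 18-joint (OpenPose-style) 2D skeleton.
demo = np.random.rand(30, 18, 2).astype(np.float32)
print(build_descriptor(demo).shape)   # (30, 18, 5)
```

Stacking these channels yields a frames-by-joints-by-channels tensor of the kind a spatiotemporal (3D) graph convolution can consume.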
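The abstract also states that different joints and different frames receive different attention. Below is a minimal sketch of one way such space-time attention could be realised on a (batch, channels, frames, joints) feature tensor; the pooling scheme, sigmoid gating, and module name are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SpaceTimeAttention(nn.Module):
    """Re-weight a feature tensor along the frame and joint axes.

    Input x: (N, C, T, V) -- batch, channels, frames, joints.
    Separate attention maps for frames (T) and joints (V) are produced by
    pooling the other axis, so informative frames/joints are emphasised.
    """
    def __init__(self, channels):
        super().__init__()
        self.frame_fc = nn.Conv1d(channels, 1, kernel_size=1)  # scores frames
        self.joint_fc = nn.Conv1d(channels, 1, kernel_size=1)  # scores joints

    def forward(self, x):
        # Frame attention: pool over joints, score each frame, squash to (0, 1).
        frame_feat = x.mean(dim=3)                              # (N, C, T)
        frame_att = torch.sigmoid(self.frame_fc(frame_feat))    # (N, 1, T)
        # Joint attention: pool over frames, score each joint.
        joint_feat = x.mean(dim=2)                              # (N, C, V)
        joint_att = torch.sigmoid(self.joint_fc(joint_feat))    # (N, 1, V)
        # Broadcast both maps back over the feature tensor.
        return x * frame_att.unsqueeze(3) * joint_att.unsqueeze(2)

# Example: batch of 4 clips, 64 channels, 30 frames, 18 joints.
x = torch.randn(4, 64, 30, 18)
print(SpaceTimeAttention(64)(x).shape)   # torch.Size([4, 64, 30, 18])
```

In a full model, such re-weighting would typically sit between graph-convolution blocks, before global pooling and the classification layer.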
Authors:
  1. Weng, Libo (Zhejiang University of Technology)
  2. Lou, Weidong (Zhejiang University of Technology)
  3. Shen, Xin (Zhejiang University of Technology)
  4. Gao, Fei (Zhejiang University of Technology; ORCID: 0000-0003-1209-0608; email: gfei_jack@163.com)
Cited by: 10.1007/s42452-024-05774-9; 10.1016/j.neucom.2023.126903
Copyright: 2022 The Authors. IET Image Processing published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
DOI: 10.1049/ipr2.12671
Discipline: Applied Sciences
EISSN: 1751-9667
End Page: 783
Genre: Article
ISSN: 1751-9659
Open Access: true
Peer Reviewed: true
Scholarly: true
Issue: 3
License: Attribution-NonCommercial-NoDerivs
ORCID: 0000-0003-1209-0608
Open Access Link: https://onlinelibrary.wiley.com/doi/abs/10.1049%2Fipr2.12671
Page Count: 11
Publication Date: 2023-02-01
Publication Title: IET Image Processing
Publication Year: 2023
Publisher: Wiley
Start Page: 773
Subject Terms: 2D human action recognition; 3D convolutional neural networks; attention mechanism; graph convolutional neural networks; skeleton sequences
URI: https://onlinelibrary.wiley.com/doi/abs/10.1049%2Fipr2.12671
     https://doaj.org/article/15ce148b8b7b4b34b934c9978efeaec6
Volume: 17