Action Recognition With Spatio-Temporal Visual Attention on Skeleton Image Sequences

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 29, No. 8, pp. 2405–2415
Main Authors: Yang, Zhengyuan; Li, Yuncheng; Yang, Jianchao; Luo, Jiebo
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.08.2019

Abstract: Action recognition with 3D skeleton sequences has become popular due to its speed and robustness. Recently proposed methods based on convolutional neural networks (CNNs) show good performance in learning spatio-temporal representations for skeleton sequences. Despite the good recognition accuracy achieved by previous CNN-based methods, two problems potentially limit performance. First, previous skeleton representations are generated by chaining joints in a fixed order; the corresponding semantic meaning is unclear, and the structural information among the joints is lost. Second, previous models lack the ability to focus on informative joints. The attention mechanism is important for skeleton-based action recognition because different joints contribute unequally toward correct recognition. To solve these two problems, we propose a novel CNN-based method for skeleton-based action recognition. We first redesign the skeleton representations with a depth-first tree traversal order, which enhances the semantic meaning of skeleton images and better preserves the associated structural information. We then propose a general two-branch attention architecture that automatically focuses on spatio-temporal key stages and filters out unreliable joint predictions. Based on this general architecture, we design a global long-sequence attention network (GLAN) with refined branch structures. Furthermore, to adjust the kernel's spatio-temporal aspect ratios and better capture long-term dependencies, we propose a sub-sequence attention network (SSAN) that takes sub-image sequences as inputs. We show that the two-branch attention architecture can be combined with the SSAN to further improve performance. Our experimental results on the NTU RGB+D data set and the SBU Kinect interaction data set outperform the state of the art. The model is further validated on noisy estimated poses from subsets of the UCF101 data set and the Kinetics data set.
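The representation step described in the abstract maps a skeleton sequence to an image whose rows follow a depth-first traversal of the joint tree, so that neighboring rows correspond to physically adjacent joints. Below is a minimal, illustrative Python sketch; the toy joint hierarchy and function names are assumptions for illustration (the actual NTU RGB+D skeleton has 25 joints, and the paper's traversal design may differ in detail).

```python
import numpy as np

# Hypothetical, simplified joint hierarchy (parent -> children).
# The real NTU RGB+D skeleton has 25 joints; this toy tree only
# illustrates the traversal idea.
SKELETON_TREE = {
    "torso": ["neck", "l_shoulder", "r_shoulder"],
    "neck": ["head"],
    "l_shoulder": ["l_elbow"],
    "l_elbow": ["l_hand"],
    "r_shoulder": ["r_elbow"],
    "r_elbow": ["r_hand"],
}

def dfs_order(root="torso"):
    """Depth-first traversal that also records each joint on backtracking,
    so every pair of adjacent rows in the skeleton image is connected
    by a bone."""
    order = []
    def visit(joint):
        order.append(joint)
        for child in SKELETON_TREE.get(joint, []):
            visit(child)
            order.append(joint)  # revisit parent when backtracking
    visit(root)
    return order

def skeleton_to_image(frames):
    """Map a skeleton sequence to an image: rows = traversal order,
    columns = time, channels = (x, y, z) joint coordinates.

    frames: list of dicts {joint_name: (x, y, z)}
    """
    order = dfs_order()
    img = np.zeros((len(order), len(frames), 3), dtype=np.float32)
    for t, frame in enumerate(frames):
        for r, joint in enumerate(order):
            img[r, t] = frame[joint]
    return img

# Usage: 20 frames of the toy skeleton with random 3-D coordinates.
rng = np.random.default_rng(0)
joints = ["torso", "neck", "head", "l_shoulder", "l_elbow", "l_hand",
          "r_shoulder", "r_elbow", "r_hand"]
frames = [{j: rng.standard_normal(3) for j in joints} for _ in range(20)]
print(skeleton_to_image(frames).shape)  # (17, 20, 3)
```

Revisiting the parent joint on backtracking (an Euler-tour-style ordering) is what gives the resulting skeleton image its spatial coherence: unlike a fixed chaining order, no two adjacent rows belong to disconnected body parts.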
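The two-branch attention architecture summarized in the abstract can be pictured as a base feature branch modulated by a mask branch that predicts a spatio-temporal attention map over the joint and time axes. The following PyTorch sketch is a hedged approximation under assumed layer sizes; the class name, channel counts, and residual weighting are hypothetical and do not reproduce the paper's refined GLAN/SSAN branch structures.

```python
import torch
import torch.nn as nn

class TwoBranchAttention(nn.Module):
    """Minimal two-branch attention block (illustrative only)."""

    def __init__(self, channels):
        super().__init__()
        # Base branch: ordinary feature transform.
        self.base = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Mask branch: predicts an attention map in [0, 1] over the
        # joint (height) and time (width) axes of the skeleton image.
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feat = self.base(x)
        attn = self.mask(x)  # shape: (N, 1, joints, time)
        # Residual weighting keeps gradients flowing where attention is small,
        # while emphasizing key stages and reliable joints.
        return feat * (1.0 + attn)

# Usage: a batch of 8 feature maps from 49-row, 32-frame skeleton images.
block = TwoBranchAttention(64)
out = block(torch.randn(8, 64, 49, 32))
print(out.shape)  # torch.Size([8, 64, 49, 32])
```

The mask branch here downweights unreliable joint predictions and highlights informative spatio-temporal regions; in the paper's SSAN variant, the same idea is applied to sub-image sequences to adjust the kernel's spatio-temporal aspect ratios.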
Author Details:
– Yang, Zhengyuan (ORCID: 0000-0002-5808-0889); Department of Computer Science, University of Rochester, Rochester, NY, USA; zyang39@cs.rochester.edu
– Li, Yuncheng; Snap Inc., Venice, CA, USA; yuncheng.li@snapchat.com
– Yang, Jianchao; Toutiao AI Lab, Menlo Park, CA, USA; jcyangenator@gmail.com
– Luo, Jiebo; Department of Computer Science, University of Rochester, Rochester, NY, USA; jluo@cs.rochester.edu
CODEN: ITCTEM
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019
DOI: 10.1109/TCSVT.2018.2864148
Discipline: Engineering
EISSN: 1558-2205
Genre: Original research
Grant Information:
– New York State through Snap
– Cheetah Mobile
– Division of Institution and Award Support (Grant 1704309; funder ID 10.13039/100005446)
ISSN: 1051-8215
Peer Reviewed: Yes
Subjects: Action and activity recognition; Architecture; Artificial neural networks; Aspect ratio; Datasets; Human analysis; Image enhancement; Image recognition; Image sequences; Joints (anatomy); Object recognition; Optical imaging; Performance enhancement; Representations; Semantics; Skeleton; Three-dimensional displays; Two-dimensional displays; Video understanding; Visual attention; Visualization
URI: https://ieeexplore.ieee.org/document/8428616
URI: https://www.proquest.com/docview/2269703759