Skeleton-Based Action Recognition With Gated Convolutional Neural Networks
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 29, No. 11, pp. 3247–3257
Main Authors: Cao, Congqi; Lan, Cuiling; Zhang, Yifan; Zeng, Wenjun; Lu, Hanqing; Zhang, Yanning
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.11.2019
Abstract: For skeleton-based action recognition, most of the existing works used recurrent neural networks. Using convolutional neural networks (CNNs) is another attractive solution considering their advantages in parallelization, effectiveness in feature learning, and model base sufficiency. Besides these, skeleton data are low-dimensional features. It is natural to arrange a sequence of skeleton features chronologically into an image, which retains the original information. Therefore, we solve the sequence learning problem as an image classification task using CNNs. For better learning ability, we build a classification network with stacked residual blocks and a special design called the linear skip gated connection, which benefits information propagation across multiple residual blocks. When arranging the coordinates of body joints in one frame into a skeleton feature, we systematically investigate the performance of part-based, chain-based, and traversal-based orders. Furthermore, a fully convolutional permutation network is designed to learn an optimized order for data rearrangement. Without any bells and whistles, our proposed model achieves state-of-the-art performance on two challenging benchmark datasets, outperforming existing methods significantly.
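The data arrangement the abstract describes, stacking the joint coordinates of each frame chronologically so that a skeleton sequence becomes an image a CNN can classify, can be illustrated with a short sketch. This is a minimal illustration rather than the authors' exact pipeline: the `(T, J, 3)` layout, the optional `joint_order` permutation (standing in for the part-based, chain-based, or traversal-based orders the paper compares), and the min-max normalization are all assumptions.

```python
import numpy as np

def skeleton_to_image(sequence, joint_order=None):
    """Arrange a skeleton sequence into an image-like array.

    sequence: float array of shape (T, J, 3); T frames, J joints,
              (x, y, z) coordinates per joint.
    joint_order: optional permutation of the J joints, e.g. a
                 part-based, chain-based, or traversal-based order.
    Returns an array of shape (T, J, 3): rows are time, columns are
    joints, and the three coordinates fill the channel axis, so the
    sequence can be fed to a CNN like an RGB image.
    """
    seq = np.asarray(sequence, dtype=np.float32)
    if joint_order is not None:
        seq = seq[:, joint_order, :]  # rearrange columns by joint order
    # Normalize each coordinate channel to [0, 1] so values behave
    # like pixel intensities (one simple choice among many).
    mins = seq.min(axis=(0, 1), keepdims=True)
    maxs = seq.max(axis=(0, 1), keepdims=True)
    return (seq - mins) / (maxs - mins + 1e-6)

# Example: 30 frames of 25 joints (as in NTU RGB+D skeletons).
frames = np.random.randn(30, 25, 3)
image = skeleton_to_image(frames)
print(image.shape)  # (30, 25, 3)
```

Treating time as the image height and the joint index as the width means ordinary 2D convolutions see local temporal and spatial (joint-order) neighborhoods at once, which is why the choice of joint ordering matters.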
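The abstract names the linear skip gated connection but does not specify it in this record. The following is a minimal sketch of one plausible reading, a residual block whose identity path is scaled by a learned sigmoid gate, written in PyTorch. The class name, layer sizes, and single-block gate placement are hypothetical; the paper's design propagates information across multiple residual blocks, which this sketch does not reproduce.

```python
import torch
import torch.nn as nn

class GatedResidualBlock(nn.Module):
    """Residual block with a gated skip connection (illustrative only).

    A learned sigmoid gate scales the identity path before it is added
    to the convolutional path, letting the network control how much of
    the unchanged input flows across the block.
    """
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        # 1x1 convolution producing a per-position gate in (0, 1).
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.gate(x) * x)

# Example: a batch of 8 "skeleton images" with 64 feature channels.
x = torch.randn(8, 64, 30, 25)
y = GatedResidualBlock(64)(x)
print(y.shape)  # torch.Size([8, 64, 30, 25])
```

Gating the skip path this way lets the network learn how much of the input to carry forward unchanged, which matches the intuition the abstract gives for easing information propagation through stacked residual blocks.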
Authors and affiliations:
1. Congqi Cao (ORCID 0000-0002-0217-9791), congqi.cao@nwpu.edu.cn, School of Computer Science, Northwestern Polytechnical University, Xi'an, China
2. Cuiling Lan (ORCID 0000-0001-9145-9957), culan@microsoft.com, Microsoft Research Asia, Beijing, China
3. Yifan Zhang (ORCID 0000-0002-9190-3509), yfzhang@nlpr.ia.ac.cn, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
4. Wenjun Zeng (ORCID 0000-0003-2531-3137), wezeng@microsoft.com, Microsoft Research Asia, Beijing, China
5. Hanqing Lu, luhq@nlpr.ia.ac.cn, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
6. Yanning Zhang, ynzhang@nwpu.edu.cn, School of Computer Science, Northwestern Polytechnical University, Xi'an, China
CODEN: ITCTEM
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2019
DOI: 10.1109/TCSVT.2018.2879913
Discipline: Engineering
EISSN: 1558-2205
Genre: Original research
Funding: Northwestern Polytechnical University, Grant 31020180QD138 (funder ID 10.13039/501100002663)
ISSN: 1051-8215
Peer reviewed: Yes
Scholarly: Yes
License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html; https://doi.org/10.15223/policy-029; https://doi.org/10.15223/policy-037
Journal abbreviation: TCSVT
Subjects: action recognition; Artificial neural networks; Bells; Convolutional neural networks; gated connection; Image classification; Learning; Logic gates; Matrix converters; Neural networks; Permutations; Recognition; Recurrent neural networks; Skeleton; Task analysis; Three-dimensional displays
URI: https://ieeexplore.ieee.org/document/8529271 ; https://www.proquest.com/docview/2311107151