Beyond Frame-level CNN: Saliency-Aware 3-D CNN With LSTM for Video Action Recognition
Published in | IEEE Signal Processing Letters, Vol. 24, no. 4, pp. 510-514 |
Main Authors | Xuanhan Wang, Lianli Gao, Jingkuan Song, Hengtao Shen |
Format | Journal Article |
Language | English |
Published | IEEE, 01.04.2017 |
ISSN | 1070-9908 (print); 1558-2361 (electronic) |
DOI | 10.1109/LSP.2016.2611485 |
Abstract | Human activity recognition in videos with convolutional neural network (CNN) features has received increasing attention in multimedia understanding. Taking videos as a sequence of frames, a new record was recently set on several benchmark datasets by feeding frame-level CNN sequence features to a long short-term memory (LSTM) model for video activity recognition. This recurrent model-based visual recognition pipeline is a natural choice for perceptual problems with time-varying visual input or sequential outputs. However, the above-mentioned pipeline takes frame-level CNN sequence features as input for the LSTM, which may fail to capture the rich motion information between adjacent frames or across multiple clips. Furthermore, an activity is conducted by one or more subjects, so it is important to use attention to focus on salient features, instead of mapping an entire frame into a static representation. To tackle these issues, we propose a novel pipeline, saliency-aware three-dimensional (3-D) CNN with LSTM, for video action recognition that integrates LSTM with saliency-aware deep 3-D CNN features computed on video shots. Specifically, we first apply saliency-aware methods to generate saliency-aware videos. Then, we design an end-to-end pipeline that integrates a 3-D CNN with LSTM, followed by a time-series pooling layer and a softmax layer to predict the activities. Notably, we set a new record on two benchmark datasets, i.e., UCF101 with 13 320 videos and HMDB-51 with 6766 videos. Our method outperforms the state-of-the-art end-to-end methods for action recognition by 3.8% and 3.2%, respectively, on these two datasets. |
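The pipeline summarized in the abstract (saliency-aware clips encoded by a 3-D CNN, an LSTM over the clip sequence, time-series pooling, and a softmax classifier) can be sketched roughly as follows. This is a minimal illustration assuming PyTorch; the `Simple3DCNN` backbone, layer sizes, clip length, and pooling choice are illustrative placeholders rather than the authors' implementation, and the saliency-detection preprocessing step is omitted.

```python
# Minimal sketch of a 3-D CNN + LSTM action-recognition pipeline (assumes PyTorch).
# Layer sizes, clip length, and the backbone are placeholders, not the paper's config.
import torch
import torch.nn as nn


class Simple3DCNN(nn.Module):
    """Toy 3-D CNN mapping one clip (3, T, H, W) to a feature vector."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),        # global spatio-temporal pooling
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, clip):                # clip: (B, 3, T, H, W)
        x = self.features(clip).flatten(1)  # (B, 64)
        return self.fc(x)                   # (B, feat_dim)


class Saliency3DCNNLSTM(nn.Module):
    """3-D CNN per clip -> LSTM over clips -> time-series pooling -> class scores."""

    def __init__(self, num_classes=101, feat_dim=256, hidden_dim=256):
        super().__init__()
        self.cnn3d = Simple3DCNN(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, clips):               # clips: (B, N_clips, 3, T, H, W)
        b, n = clips.shape[:2]
        # Encode every clip independently with the shared 3-D CNN.
        feats = self.cnn3d(clips.flatten(0, 1)).view(b, n, -1)
        hidden_seq, _ = self.lstm(feats)    # (B, N_clips, hidden_dim)
        pooled = hidden_seq.mean(dim=1)     # pool LSTM outputs over time
        return self.classifier(pooled)      # logits; softmax / cross-entropy at loss time


if __name__ == "__main__":
    # 2 videos, each split into 4 (hypothetically saliency-masked) clips of
    # 8 RGB frames at 112x112.
    videos = torch.randn(2, 4, 3, 8, 112, 112)
    model = Saliency3DCNNLSTM(num_classes=101)
    print(model(videos).shape)              # torch.Size([2, 101])
```

In the paper's setting the inputs would be saliency-processed clips from UCF101 (101 classes) or HMDB-51 (51 classes), and training would minimize a cross-entropy loss over the pooled predictions; the sketch above is only a rough approximation of that setup.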
Author | Xuanhan Wang; Lianli Gao; Jingkuan Song; Hengtao Shen |
Author_xml | – sequence: 1 givenname: Xuanhan surname: Wang fullname: Wang, Xuanhan – sequence: 2 givenname: Lianli surname: Gao fullname: Gao, Lianli – sequence: 3 givenname: Jingkuan surname: Song fullname: Song, Jingkuan – sequence: 4 givenname: Hengtao surname: Shen fullname: Shen, Hengtao |
CODEN | ISPLEM |
Cites_doi | 10.1109/CVPR.2014.223 10.1109/CVPR.2015.7298935 10.1007/s00530-015-0494-1 10.1109/TBC.2016.2580920 10.1109/ICCV.2015.522 10.1145/1291233.1291311 10.1002/sec.1582 10.1109/CVPR.2015.7298691 10.1109/TCYB.2015.2403356 10.1109/TKDE.2010.99 10.1109/TPAMI.2016.2577031 10.1109/TPAMI.2012.59 10.1109/CVPR.2008.4587756 10.1109/VSPETS.2005.1570899 10.1109/CVPR.2015.7298961 10.1109/TIP.2016.2601260 10.1109/CVPR.2015.7298878 10.1109/ICCV.2013.441 10.1109/TBC.2015.2419824 10.1109/ICCV.2015.510 10.1109/CVPR.2015.7299066 10.1109/TIP.2014.2332764 10.1016/j.neucom.2015.08.115 10.1109/ICCV.2011.6126543 |
ContentType | Journal Article |
DOI | 10.1109/LSP.2016.2611485 |
Discipline | Engineering |
EISSN | 1558-2361 |
EndPage | 514 |
ExternalDocumentID | 10_1109_LSP_2016_2611485 7572183 |
Genre | orig-research |
GrantInformation_xml | – fundername: National Natural Science Foundation of China grantid: 61502080 funderid: 10.13039/501100001809 – fundername: Fundamental Research Funds for the Central Universities grantid: ZYGX2014J063 |
ISSN | 1070-9908 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 4 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html |
PageCount | 5 |
PublicationCentury | 2000 |
PublicationDate | 2017-April 2017-4-00 |
PublicationDateYYYYMMDD | 2017-04-01 |
PublicationDate_xml | – month: 04 year: 2017 text: 2017-April |
PublicationDecade | 2010 |
PublicationTitle | IEEE signal processing letters |
PublicationTitleAbbrev | LSP |
PublicationYear | 2017 |
Publisher | IEEE |
Publisher_xml | – name: IEEE |
References |
– ref1: doi:10.1109/VSPETS.2005.1570899
– ref2: doi:10.1109/CVPR.2015.7298878
– ref3: doi:10.1007/s00530-015-0494-1
– ref4: doi:10.1109/CVPR.2015.7299066
– ref5: Gao, "Graph-without-cut: An ideal graph learning for image segmentation," Proc. 13th AAAI Conf. Artif. Intell., Phoenix, AZ, USA, p. 1188
– ref6: Graves, "Towards end-to-end speech recognition with recurrent neural networks," Proc. 31st Int. Conf. Mach. Learn., p. 1764
– ref7: doi:10.1109/TPAMI.2012.59
– ref8: doi:10.1109/CVPR.2014.223
– ref9: Krizhevsky, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems 25, 2012, p. 1097
– ref10: doi:10.1109/ICCV.2011.6126543
– ref11: doi:10.1109/CVPR.2008.4587756
– ref12: Ng, "Beyond short snippets: Deep networks for video classification," Proc. 2015 IEEE Conf. Comput. Vis. Pattern Recog., p. 4694
– ref13: doi:10.1109/TPAMI.2016.2577031
– ref14: doi:10.1109/CVPR.2015.7298691
– ref15: doi:10.1145/1291233.1291311
– ref16: Simonyan, "Two-stream convolutional networks for action recognition in videos," Proc. Neural Inform. Process. Syst., p. 568
– ref17: doi:10.1109/TIP.2016.2601260
– ref18: Soomro, "UCF101: A dataset of 101 human actions classes from videos in the wild," CoRR, 2012
– ref19: doi:10.1109/ICCV.2015.522
– ref20: Sutskever, "Sequence to sequence learning with neural networks," Advances in Neural Information Processing Systems, 2014, p. 3104
– ref21: doi:10.1109/ICCV.2015.510
– ref22: doi:10.1109/CVPR.2015.7298935
– ref23: doi:10.1109/ICCV.2013.441
– ref24: doi:10.1109/CVPR.2015.7298961
– ref25: Zeiler, "Visualizing and understanding convolutional networks," Proc. 13th Eur. Conf. Comput. Vis., p. 818
– ref26: doi:10.1016/j.neucom.2015.08.115
– ref27: doi:10.1109/TCYB.2015.2403356
– ref28: Zhu, "Robust joint graph sparse coding for unsupervised spectral feature selection," IEEE Trans. Neural Netw. Learning Syst., 2016
– ref29: doi:10.1109/TIP.2014.2332764
– ref30: doi:10.1109/TKDE.2010.99
– ref31: doi:10.1002/sec.1582
– ref32: doi:10.1109/TBC.2016.2580920
– ref33: doi:10.1109/TBC.2015.2419824
StartPage | 510 |
SubjectTerms | Action recognition; Computer architecture; deep learning; Image recognition; LSTM; Microprocessors; Pipelines; saliency-aware three-dimensional (3-D) convolution; Three-dimensional displays; Time series analysis; Visualization
Title | Beyond Frame-level CNN: Saliency-Aware 3-D CNN With LSTM for Video Action Recognition |
URI | https://ieeexplore.ieee.org/document/7572183 |
Volume | 24 |