Improved human action recognition approach based on two-stream convolutional neural network model
Published in | The Visual computer Vol. 37; no. 6; pp. 1327 - 1341 |
---|---|
Main Authors | Liu, Congcong; Ying, Jie; Yang, Haima; Hu, Xing; Liu, Jin |
Format | Journal Article |
Language | English |
Published | Berlin/Heidelberg: Springer Berlin Heidelberg; Springer Nature B.V, 01.06.2021 |
Subjects | Human action recognition; Video surveillance; Motion history image; Faster R-CNN; Kalman filter |
Abstract | To improve the accuracy of human abnormal-behavior recognition, a two-stream convolutional neural network model is proposed. The model comprises two main parts, VMHI and FRGB. First, motion history images are extracted and fed into a VGG-16 convolutional neural network for training. Then, RGB images are fed into the Faster R-CNN algorithm for training, using Kalman-filter-assisted data annotation. Finally, the results of the two streams, VMHI and FRGB, are fused. The algorithm recognizes not only single-person behavior but also two-person interaction behavior, and it improves the recognition accuracy of similar actions. Experimental results on the KTH, Weizmann, UT-Interaction, and TenthLab datasets show that the proposed algorithm achieves higher accuracy than previously reported methods. |
---|---|
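The VMHI stream of the abstract starts from motion history images, i.e., temporal templates in the sense of Bobick and Davis: each pixel holds a timestamp-like intensity that is set to a maximum where motion is currently detected and decays elsewhere. A minimal sketch of the standard per-frame MHI update follows; the parameter values `tau` (duration) and `xi` (difference threshold) are illustrative assumptions, not settings taken from the paper:

```python
import numpy as np

def update_mhi(mhi, frame, prev_frame, tau=30, xi=25):
    """One step of the classic motion-history-image update.

    Pixels where motion is detected (frame difference >= xi) are
    stamped with the maximum duration tau; all other pixels decay
    by 1 per frame toward zero. tau and xi are illustrative
    parameter choices, not values from the paper.
    """
    # binary motion mask from simple frame differencing
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    motion = diff >= xi
    # decay old motion history, then stamp new motion at full intensity
    mhi = np.maximum(mhi - 1, 0)
    mhi[motion] = tau
    return mhi
```

Run over a clip, the accumulated `mhi` array is what would be rendered to an image and passed to VGG-16 in the described pipeline.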
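The abstract states only that the VMHI and FRGB stream results are fused, without specifying how. A common way to realize such a step is weighted late fusion of the per-class scores; the sketch below assumes this scheme, and the weight `w` and the averaging rule are illustrative assumptions rather than details from the paper:

```python
import numpy as np

def fuse_two_stream(p_vmhi, p_frgb, w=0.5):
    """Weighted late fusion of two streams' per-class scores.

    p_vmhi: class scores from the motion-history (VMHI) stream.
    p_frgb: class scores from the RGB (FRGB) stream.
    w:      weight on the VMHI stream (assumed, not from the paper).

    Returns the predicted class index and the fused score vector.
    """
    p_vmhi = np.asarray(p_vmhi, dtype=float)
    p_frgb = np.asarray(p_frgb, dtype=float)
    # convex combination of the two score vectors, then argmax
    fused = w * p_vmhi + (1.0 - w) * p_frgb
    return int(np.argmax(fused)), fused
```

With equal weights this reduces to averaging the two streams' scores, which is the baseline fusion rule in much of the two-stream literature.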
Author | Liu, Congcong; Ying, Jie; Yang, Haima; Hu, Xing; Liu, Jin |
Author_xml | – sequence: 1, givenname: Congcong, surname: Liu, fullname: Liu, Congcong, organization: School of Optical Electrical and Computer Engineering, University of Shanghai for Science and Technology
– sequence: 2, givenname: Jie, surname: Ying, fullname: Ying, Jie, organization: School of Optical Electrical and Computer Engineering, University of Shanghai for Science and Technology
– sequence: 3, givenname: Haima, surname: Yang, fullname: Yang, Haima, email: snowyhm@sina.com, organization: School of Optical Electrical and Computer Engineering, University of Shanghai for Science and Technology
– sequence: 4, givenname: Xing, surname: Hu, fullname: Hu, Xing, organization: School of Optical Electrical and Computer Engineering, University of Shanghai for Science and Technology
– sequence: 5, givenname: Jin, surname: Liu, fullname: Liu, Jin, organization: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science |
ContentType | Journal Article |
Copyright | Springer-Verlag GmbH Germany, part of Springer Nature 2020 |
DOI | 10.1007/s00371-020-01868-8 |
Discipline | Engineering Computer Science |
EISSN | 1432-2315 |
EndPage | 1341 |
GrantInformation_xml | – fundername: Fund Project of National Natural Science Foundation of China grantid: 61701296 – fundername: Shanghai Natural Science Foundation grantid: 17ZR1443500 – fundername: Joint Funds of the National Natural Science Foundation of China grantid: U1831133 |
ISSN | 0178-2789 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 6 |
Keywords | Human action recognition; Video surveillance; Motion history image; Faster R-CNN; Kalman filter |
Language | English |
PageCount | 15 |
PublicationDate | 2021-06-01 |
PublicationPlace | Berlin/Heidelberg |
PublicationSubtitle | International Journal of Computer Graphics |
PublicationTitle | The Visual computer |
PublicationTitleAbbrev | Vis Comput |
PublicationYear | 2021 |
Publisher | Springer Berlin Heidelberg Springer Nature B.V |
StartPage | 1327 |
SubjectTerms | Algorithms; Annotations; Artificial Intelligence; Artificial neural networks; Computer Graphics; Computer Science; Deep learning; Human activity recognition; Image Processing and Computer Vision; Kalman filters; Neural networks; Original Article; Performance evaluation; Recognition; Spacetime; Support vector machines; Training; Wavelet transforms |
Title | Improved human action recognition approach based on two-stream convolutional neural network model |
URI | https://link.springer.com/article/10.1007/s00371-020-01868-8 https://www.proquest.com/docview/2917975384 |
Volume | 37 |