Deep learning-based multi-view 3D-human action recognition using skeleton and depth data
Published in | Multimedia Tools and Applications, Vol. 82, No. 13, pp. 19829–19851 |
Main Authors | Ghosh, Sampat Kumar; M, Rashmi; Mohan, Biju R; Guddeti, Ram Mohana Reddy |
Format | Journal Article |
Language | English |
Published | New York: Springer US; Springer Nature B.V, 1 May 2023 |
Abstract | Human Action Recognition (HAR) is a fundamental challenge that smart surveillance systems must overcome. With the rising affordability of depth cameras for capturing human actions, HAR has garnered increasing interest over the years; however, the majority of these efforts have focused on single-view HAR. Recognizing human actions from arbitrary viewpoints is more challenging, as the same action is observed differently from different angles. This paper proposes a multi-stream Convolutional Neural Network (CNN) model for multi-view HAR using depth and skeleton data. We also propose a novel and efficient depth descriptor, Edge Detected-Motion History Image (ED-MHI), based on Canny Edge Detection and Motion History Image. In addition, the proposed skeleton descriptor, Motion and Orientation of Joints (MOJ), represents an action through joint motion and orientation. Experimental results on two human action datasets, NUCLA Multiview Action3D and NTU RGB-D, using a cross-subject evaluation protocol demonstrate that the proposed system outperforms state-of-the-art works, with 93.87% and 85.61% accuracy, respectively. |
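The depth descriptor named in the abstract combines edge detection with a Motion History Image (MHI). As a rough illustration of that idea only, here is a minimal NumPy sketch assuming the standard MHI update rule (stamp moving pixels with a timestamp `tau`, decay the rest), with a simple gradient-threshold edge map standing in for Canny; all function names, thresholds, and the toy sequence are illustrative and not taken from the paper:

```python
import numpy as np

def edge_map(frame, thresh=0.2):
    # Simple gradient-magnitude edge detector (a stand-in for Canny).
    gx = np.abs(np.diff(frame, axis=1, prepend=frame[:, :1]))
    gy = np.abs(np.diff(frame, axis=0, prepend=frame[:1, :]))
    return (gx + gy > thresh).astype(np.float32)

def update_mhi(mhi, prev_edges, edges, tau=10.0, delta=1.0):
    # Standard MHI update: pixels whose edge state changed are "moving"
    # and get stamped with tau; all other pixels decay toward zero.
    moving = np.abs(edges - prev_edges) > 0
    return np.where(moving, tau, np.maximum(mhi - delta, 0.0))

# Toy sequence: a bright square moving one pixel right per frame.
frames = []
for t in range(5):
    f = np.zeros((16, 16), dtype=np.float32)
    f[4:8, 2 + t:6 + t] = 1.0
    frames.append(f)

mhi = np.zeros_like(frames[0])
prev = edge_map(frames[0])
for f in frames[1:]:
    cur = edge_map(f)
    mhi = update_mhi(mhi, prev, cur, tau=10.0)
    prev = cur

print(mhi.max())  # most recent edge motion carries the full stamp tau
```

The resulting single-channel image encodes where and how recently edges moved (brighter = more recent), which is the kind of compact motion summary a CNN stream can consume.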
Author | Mohan, Biju R; M, Rashmi; Ghosh, Sampat Kumar; Guddeti, Ram Mohana Reddy |
Author_xml | 1. Ghosh, Sampat Kumar (sampatghosh1995@gmail.com), Department of Information Technology, National Institute of Technology Karnataka; 2. M, Rashmi (nm.rashmi@gmail.com; ORCID 0000-0003-2101-5992), Department of Information Technology, National Institute of Technology Karnataka; 3. Mohan, Biju R, Department of Information Technology, National Institute of Technology Karnataka; 4. Guddeti, Ram Mohana Reddy, Department of Information Technology, National Institute of Technology Karnataka |
Copyright | The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. |
DOI | 10.1007/s11042-022-14214-y |
Discipline | Engineering Computer Science |
EISSN | 1573-7721 |
EndPage | 19851 |
ISSN | 1380-7501 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 13 |
Keywords | Deep learning; Human action recognition; Feature fusion; Convolutional neural networks; Score fusion |
ORCID | 0000-0003-2101-5992 |
PageCount | 23 |
PublicationSubtitle | An International Journal |
PublicationTitle | Multimedia tools and applications |
PublicationTitleAbbrev | Multimed Tools Appl |
PublicationYear | 2023 |
Publisher | Springer US; Springer Nature B.V |
SubjectTerms | Algorithms; Artificial intelligence; Artificial neural networks; Computer Communication Networks; Computer Science; Data Structures and Information Theory; Datasets; Deep learning; Edge detection; Human activity recognition; Machine learning; Motion perception; Multimedia; Multimedia Information Systems; Neural networks; Special Purpose and Application-Based Systems; Surveillance systems |