HARNet in deep learning approach—a systematic survey
Published in | Scientific Reports, Vol. 14, No. 1, Article 8363 (15 pages) |
Main Authors | Kumar, Neelam Sanjeev; Deepika, G.; Goutham, V.; Buvaneswari, B.; Reddy, R. Vijaya Kumar; Angadi, Sanjeevkumar; Dhanamjayulu, C.; Chinthaginjala, Ravikumar; Mohammad, Faruq; Khan, Baseem |
Format | Journal Article |
Language | English |
Published | London: Nature Publishing Group UK (Nature Portfolio), 10.04.2024 |
Subjects | Human action recognition (HAR); Deep learning; Computer vision; Neural networks; Machine learning |
ISSN | 2045-2322 |
DOI | 10.1038/s41598-024-58074-y |
Abstract | A comprehensive examination of human action recognition (HAR) methodologies situated at the convergence of deep learning and computer vision is the subject of this article. We examine the progression from handcrafted feature-based approaches to end-to-end learning, with a particular focus on the significance of large-scale datasets. By classifying research paradigms, such as temporal modelling and spatial features, our proposed taxonomy illuminates the merits and drawbacks of each. We specifically present HARNet, a multi-model deep learning architecture that integrates recurrent and convolutional neural networks and uses attention mechanisms to improve accuracy and robustness. The VideoMAE v2 method (https://github.com/OpenGVLab/VideoMAEv2) is used as a case study to illustrate practical implementations and obstacles. For researchers and practitioners seeking a comprehensive understanding of the most recent advancements in HAR as they relate to computer vision and deep learning, this survey is an invaluable resource. |
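The abstract describes HARNet as a hybrid architecture: per-frame convolutional features, a recurrent network for temporal modelling, and attention for weighting informative frames. This record contains no code, so the PyTorch sketch below only illustrates that general CNN + RNN + attention pattern; the class name, layer sizes, and clip shape are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of the CNN + RNN + attention pattern the abstract
# describes. Illustration only; not the published HARNet code.
import torch
import torch.nn as nn

class HARNetSketch(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int = 512, hidden: int = 256):
        super().__init__()
        # Spatial branch: a tiny per-frame CNN (a pretrained backbone
        # such as ResNet would normally be used instead).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal branch: a recurrent model over the frame features.
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        # Attention: score each timestep, softmax over time, weighted sum.
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        states, _ = self.rnn(feats)                    # (b, t, hidden)
        weights = torch.softmax(self.attn(states), 1)  # (b, t, 1)
        pooled = (weights * states).sum(dim=1)         # (b, hidden)
        return self.head(pooled)

# Usage: classify a batch of 8-frame 112x112 RGB clips into 101 classes.
model = HARNetSketch(num_classes=101)
logits = model(torch.randn(2, 8, 3, 112, 112))  # -> shape (2, 101)
```

In practice a pretrained spatial backbone and a benchmark dataset such as UCF101 or Kinetics (both cited in the References below) would replace the toy CNN and the random input.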
ArticleNumber | 8363 |
Author | Khan, Baseem; Chinthaginjala, Ravikumar; Deepika, G.; Buvaneswari, B.; Mohammad, Faruq; Angadi, Sanjeevkumar; Kumar, Neelam Sanjeev; Reddy, R. Vijaya Kumar; Goutham, V.; Dhanamjayulu, C. |
Author_xml |
1. Neelam Sanjeev Kumar, Department of Computer Science and Engineering, SRM Institute of Science and Technology
2. G. Deepika, Department of Electronics and Communication Engineering, St. Peter's Engineering College
3. V. Goutham, Department of Computer Science and Engineering, St Mary's Group of Institutions
4. B. Buvaneswari, Department of Information Technology, Panimalar Engineering College
5. R. Vijaya Kumar Reddy, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation
6. Sanjeevkumar Angadi, Department of Computer Science and Engineering, Nutan College of Engineering and Research
7. C. Dhanamjayulu (dhanamjayulu.c@vit.ac.in), School of Electrical Engineering, Vellore Institute of Technology
8. Ravikumar Chinthaginjala, School of Electronics Engineering, Vellore Institute of Technology
9. Faruq Mohammad, Department of Chemistry, College of Science, King Saud University
10. Baseem Khan (baseemkh@hu.edu.et), Department of Electrical and Computer Engineering, Hawassa University |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/38600138 (view this record in MEDLINE/PubMed) |
CitedBy_id | 10.1038/s41598-024-84864-5; 10.1038/s41598-025-92464-0; 10.3390/a17100434; 10.1016/j.asej.2025.103286; 10.1016/j.rineng.2024.103275; 10.1038/s41598-025-92676-4; 10.1016/j.asej.2024.103136 |
Cites_doi | 10.1109/CVPR.2017.143 10.1109/ICCV.2017.322 10.1109/CVPR.2016.91 10.1109/CVPR.2017.787 10.3390/s19081871 10.1109/ICCV.2015.510 10.1109/CVPR.2017.502 10.3390/s23042182 10.1109/WACV.2017.24 10.1109/ICCV.2019.00630 10.1109/ICRA.2011.5980382 10.1007/978-3-319-46484-8_2 10.1109/TPAMI.2012.59 10.1109/CVPR.2015.7298878 10.1109/CVPR.2014.223 10.1109/CVPR.2018.00685 10.1609/aaai.v30i1.10451 10.1109/CVPR.2016.90 10.1007/s10462-021-10116-x 10.1109/ICCV.2013.441 10.1007/978-3-319-04561-0_2 |
ContentType | Journal Article |
Copyright | The Author(s) 2024 2024. The Author(s). The Author(s) 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
DOI | 10.1038/s41598-024-58074-y |
Discipline | Biology; Computer Science |
EISSN | 2045-2322 |
EndPage | 15 |
ExternalDocumentID | oai_doaj_org_article_4d9191798e5a4d84b99f6d02fc0ba79f PMC11006844 38600138 10_1038_s41598_024_58074_y |
Genre | Journal Article |
ISSN | 2045-2322 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 1 |
Keywords | Human action recognition (HAR); Deep learning; CNN; Accuracy; Feature-based approaches |
Language | English |
License | 2024. The Author(s). Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
OpenAccessLink | http://journals.scholarsportal.info/openUrl.xqy?doi=10.1038/s41598-024-58074-y |
PMID | 38600138 |
PQID | 3035350484 |
PQPubID | 2041939 |
PageCount | 15 |
PublicationDate | 2024-04-10 |
PublicationPlace | London, England |
PublicationTitle | Scientific Reports (abbrev. Sci Rep) |
PublicationYear | 2024 |
Publisher | Nature Publishing Group UK; Nature Publishing Group; Nature Portfolio |
References |
– Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., & Van Gool, L. Temporal segment networks: Towards good practices for deep action recognition. In European Conference on Computer Vision (ECCV) 20–36 (2016).
– Ji, S., Xu, W., Yang, M., & Yu, K. 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 221–231 (2013). https://doi.org/10.1109/TPAMI.2012.59
– Zolfaghari, M., Singh, K., Brox, T., & Schiele, B. ECOfusion: Fusing via early or late combination. In European Conference on Computer Vision (ECCV) (2018).
– Morshed, M. G., Sultana, T., Alam, A., & Lee, Y.-K. Human action recognition: A taxonomy-based survey, updates, and opportunities. Sensors 23, 2182 (2023). https://doi.org/10.3390/s23042182
– Khorrami, P., Liao, W., Lech, M., Ternovskiy, E., & Lee, Y. J. CombineNet: A deep neural network for human activity recognition. In Proceedings of the European Conference on Computer Vision (ECCV) 3–19 (2019).
– Wang, J., Liu, Z., Wu, Y., & Yuan, J. Learning actionlet ensemble for 3D human action recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 1631–1638 (2013).
– Zhang, S., Liu, X., & Xiao, J. On geometric features for skeleton-based action recognition using multilayer LSTM networks. In IEEE Winter Conference on Applications of Computer Vision (WACV) 784–791 (2017).
– He, K., Zhang, X., Ren, S., & Sun, J. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 770–778 (2016).
– Wang, H., Kläser, A., Schmid, C., & Liu, C.-L. Human action recognition: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 36(3), 537–556 (2013).
– Wang, L., Xiong, Y., Wang, Z., & Qiao, Y. Towards good practices for very deep two-stream ConvNets. arXiv preprint arXiv:1705.07750 (2017).
– Simonyan, K., & Zisserman, A. Two-stream convolutional networks for action recognition in videos. arXiv preprint arXiv:1406.2199 (2014).
– Carreira, J., & Zisserman, A. Quo Vadis, action recognition? A new model and the Kinetics dataset. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 4724–4733 (2017).
– Carreira, J., & Zisserman, A. Quo Vadis, action recognition? A new model and the Kinetics benchmark. IEEE Trans. Pattern Anal. Mach. Intell. 40(8), 2109–2123 (2018).
– Zhang, Z., & Liu, L. Joint semantic-embedding space for human action recognition and actionlet ensemble. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 1758–1763 (2018).
– Singh, A., Gautam, A., & Dubey, S. R. A survey of human action recognition with depth cameras. J. King Saud Univ. Comput. Inf. Sci. 31(4), 537–551 (2019).
– Hara, K., Kataoka, H., & Satoh, Y. Can spatiotemporal 3D CNNs retrace the history of 2D CNNs and ImageNet? In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 6546–6555 (2018).
– Gupta, N., Gupta, S. K., & Pathak, R. K. Human activity recognition in artificial intelligence framework: A narrative review. Artif. Intell. Rev. 55, 4755–4808 (2022). https://doi.org/10.1007/s10462-021-10116-x
– Lai, K., Bo, L., Ren, X., & Fox, D. A large-scale hierarchical multi-view RGB-D object dataset. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai 1817–1824 (IEEE, 2011).
– Pengfei, Z., et al. View adaptive recurrent neural networks for high performance human action recognition from skeleton data. arXiv:1703.08274v2 (2017).
– Tran, D., Bourdev, L., Fergus, R., Torresani, L., & Paluri, M. Learning spatiotemporal features with 3D convolutional networks. In IEEE International Conference on Computer Vision (ICCV) 4489–4497 (2015).
– Kay, W., Carreira, J., Simonyan, K., Zhang, B., Hillier, C., Vijayanarasimhan, S., Viola, F., Green, T., Back, T., Natsev, P., Suleyman, M., & Zisserman, A. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950 (2017).
– Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. You only look once: Unified, real-time object detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 779–788 (2016).
– Wang, H., & Schmid, C. Action recognition with improved trajectories. In IEEE International Conference on Computer Vision (ICCV) 3551–3558 (2013).
– Li, W., Zhang, Z., & Liu, Z. Action recognition based on joint trajectory maps with convolutional neural networks. IEEE Trans. Image Process. 27(3), 1339–1350 (2018).
– Varol, G., Laptev, I., & Schmid, C. Long-term temporal convolutions for action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 39(8), 1563–1577 (2017).
– Zhang, Y., Tian, Y., Kong, Y., & Zhong, B. W-TALC: Weakly-supervised temporal activity localization and classification. In European Conference on Computer Vision (ECCV) 498–513 (2016).
– Feichtenhofer, C., Pinz, A., & Wildes, R. Spatiotemporal residual networks for video action recognition. In Advances in Neural Information Processing Systems (NeurIPS) 3431–3439 (2016).
– Zhang, Y., Zhao, Q., & Yu, H. Deep learning for human activity recognition: A review. Sensors 19(8), 1873 (2019). https://doi.org/10.3390/s19081871
– Donahue, J., Anne Hendricks, L., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., & Darrell, T. Long-term recurrent convolutional networks for visual recognition and description. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2625–2634 (2015).
– Zhu, W., Lan, C., Xing, J., Zeng, W., Li, Y., & Shen, L. Co-occurrence feature learning for skeleton based action recognition using regularized deep LSTM networks. In AAAI Conference on Artificial Intelligence 2396–2402 (2016).
– He, K., Gkioxari, G., Dollár, P., & Girshick, R. Mask R-CNN. In IEEE International Conference on Computer Vision (ICCV) 2980–2988 (2017).
– Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., & Li, F. F. Large-scale video classification with convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 1725–1732 (2014).
– Cao, Z., Simon, T., Wei, S. E., & Sheikh, Y. Realtime multi-person 2D pose estimation using part affinity fields. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 7291–7299 (2017).
– Simonyan, K., & Zisserman, A. Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems (NeurIPS) 568–576 (2014).
– Garcia, L., & Bruguier, D. A survey on human activity recognition using wearable sensors. IEEE Sensors J. 18(7), 2839–2850 (2018).
– Feichtenhofer, C., Fan, H., Malik, J., & He, K. SlowFast networks for video recognition. In IEEE International Conference on Computer Vision (ICCV) 6201–6210 (2019).
– Soomro, K., Zamir, A. R., & Shah, M. UCF101: A dataset of 101 human action classes from videos in the wild. arXiv preprint arXiv:1212.0402 (2012). |
SourceID | doaj pubmedcentral proquest pubmed crossref springer |
SourceType | Open Website Open Access Repository Aggregation Database Index Database Enrichment Source Publisher |
StartPage | 8363 |
SubjectTerms | 639/166; 639/4077; Accuracy; Automation; CNN; Computer engineering; Computer science; Computer vision; Datasets; Deep learning; Feature-based approaches; Human action recognition (HAR); Humanities and Social Sciences; Machine learning; multidisciplinary; Neural networks; Research methodology; Science; Science (multidisciplinary); Sensors; Surveys; Taxonomy; Trends |
Title | HARNet in deep learning approach—a systematic survey |
URI | https://link.springer.com/article/10.1038/s41598-024-58074-y https://www.ncbi.nlm.nih.gov/pubmed/38600138 https://www.proquest.com/docview/3035350484 https://www.proquest.com/docview/3037394897 https://pubmed.ncbi.nlm.nih.gov/PMC11006844 https://doaj.org/article/4d9191798e5a4d84b99f6d02fc0ba79f |
Volume | 14 |