DiRecNetV2: A Transformer-Enhanced Network for Aerial Disaster Recognition
Published in | SN Computer Science, Vol. 5, no. 6, p. 770 |
---|---|
Main Authors | Demetris Shianios, Panayiotis S. Kolios, Christos Kyrkou |
Format | Journal Article |
Language | English |
Published | Singapore: Springer Nature Singapore, 08.08.2024 |
Subjects | |
Online Access | Get full text |
Abstract | The integration of Unmanned Aerial Vehicles (UAVs) with artificial intelligence (AI) models for aerial imagery processing in disaster assessment necessitates models that demonstrate exceptional accuracy, computational efficiency, and real-time processing capabilities. Traditionally, Convolutional Neural Networks (CNNs) demonstrate efficiency in local feature extraction but are limited in their capacity for global context interpretation. Vision Transformers (ViTs), on the other hand, show promise for improved global context interpretation through the use of attention mechanisms, yet they remain underinvestigated in UAV-based disaster response applications. Bridging this research gap, we introduce DiRecNetV2, an improved hybrid model that utilizes convolutional and transformer layers. It merges the inductive biases of CNNs for robust feature extraction with the global context understanding of Transformers, while maintaining a low computational load ideal for UAV applications. Additionally, we introduce a new, compact multi-label disaster dataset to set an initial benchmark for future research, exploring how models trained on single-label data perform on a multi-label test set. The study assesses lightweight CNNs and ViTs on the AIDERSv2 dataset, using frames per second (FPS) for efficiency and weighted F1 scores for classification performance. DiRecNetV2 not only achieves a weighted F1 score of 0.964 on a single-label test set but also demonstrates adaptability, scoring 0.614 on a complex multi-label test set while running at 176.13 FPS on the Nvidia Jetson Orin device. |
---|---|
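The abstract reports classification performance as a weighted F1 score, i.e. the per-class F1 averaged with weights proportional to each class's support in the test set. As a minimal illustrative sketch (the class labels and predictions below are hypothetical, not taken from the paper's data):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights equal to each class's share of y_true."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in support:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (support[c] / total) * f1
    return score

# Hypothetical disaster-class predictions (labels are illustrative only)
y_true = ["flood", "fire", "fire", "earthquake", "flood", "flood"]
y_pred = ["flood", "fire", "flood", "earthquake", "flood", "fire"]
print(round(weighted_f1(y_true, y_pred), 3))  # → 0.667
```

Weighting by support makes the metric robust to the class imbalance typical of disaster datasets, where rare classes would otherwise dominate a plain macro average.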
ArticleNumber | 770 |
Author | Demetris Shianios (ORCID 0009-0005-8266-0727), Panayiotis S. Kolios (ORCID 0000-0003-3981-993X), Christos Kyrkou (ORCID 0000-0002-7926-7642); KIOS Research and Innovation Center of Excellence and Department of Computer Science, University of Cyprus |
ContentType | Journal Article |
Copyright | The Author(s) 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
DOI | 10.1007/s42979-024-03066-y |
Discipline | Computer Science |
EISSN | 2661-8907 |
GrantInformation | HORIZON EUROPE Widening participation and spreading excellence, grant 739551 (funder ID: http://dx.doi.org/10.13039/100018706); University of Cyprus |
ISSN | 2661-8907 2662-995X |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 6 |
Keywords | Convolutional Neural Networks; Vision Transformers; Natural disaster recognition; UAV (Unmanned Aerial Vehicle); Multi-label classification; Image classification |
OpenAccessLink | https://doi.org/10.1007/s42979-024-03066-y |
PublicationPlace | Singapore |
PublicationTitle | SN Computer Science |
PublicationTitleAbbrev | SN COMPUT. SCI |
PublicationYear | 2024 |
Publisher | Springer Nature Singapore Springer Nature B.V |
StartPage | 770 |
SubjectTerms | Artificial intelligence; Artificial neural networks; Building failures; Classification; Computer Analysis of Images and Patterns in the Deep Learning Era; Computer Imaging; Computer Science; Computer Systems Organization and Communication Networks; Context; Data collection; Data Structures and Information Theory; Datasets; Deep learning; Disasters; Earthquakes; Efficiency; Embedded systems; Emergency preparedness; Feature extraction; Floods; Forest & brush fires; Frames per second; Information Systems and Communication Service; Labels; Original Research; Pattern Recognition and Graphics; Performance evaluation; R&D; Real time; Research & development; Satellites; Seismic engineering; Social networks; Software Engineering/Programming and Operating Systems; Test sets; Unmanned aerial vehicles; Vision |
Title | DiRecNetV2: A Transformer-Enhanced Network for Aerial Disaster Recognition |
URI | https://link.springer.com/article/10.1007/s42979-024-03066-y https://www.proquest.com/docview/3090749385 |
Volume | 5 |