DECA: a novel multi-scale efficient channel attention module for object detection in real-life fire images
Published in | Applied Intelligence (Dordrecht, Netherlands), Vol. 52, No. 2, pp. 1362–1375 |
Main Authors | Wang, Junjie; Yu, Jiong; He, Zhu |
Format | Journal Article |
Language | English |
Published | New York: Springer US, 01.01.2022 (Springer Nature B.V) |
Abstract | Channel attention mechanisms have attracted increasing research interest because of their generality and effectiveness in deep convolutional neural networks (DCNNs). However, the signal-encoding methods of current popular channel attention mechanisms are limited. For example, SENet encodes channel relevance with fully connected layers, which is parameter-costly; ECANet encodes channel relevance with a 1D convolution, which uses fewer parameters but can only encode k adjacent channels at a single fixed scale. This paper proposes a novel dilated efficient channel attention module (DECA), which consists of a novel multi-scale channel encoding method and a novel channel-relevance feature fusion method. We empirically show that channel relevance at different scales also contributes to performance, and that fusing channel-relevance features across scales yields a more powerful channel feature representation. In addition, we use weight sharing extensively in the DECA module to make it more efficient. We apply the module to a real-life fire image detection task to evaluate its effectiveness. Extensive experiments across backbone depths, detectors, and fire datasets show that the DECA module yields an average performance boost of more than 4.5% compared to the baselines. Meanwhile, DECA outperforms other state-of-the-art attention modules while keeping lower or comparable parameter counts. The experimental results on different datasets also show that the DECA module generalizes well. |
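The abstract contrasts ECANet's fixed-scale 1D-convolutional channel encoding with DECA's multi-scale, weight-shared encoding and cross-scale fusion. As an illustration only (the paper's exact architecture is not reproduced here), a minimal NumPy sketch of such a DECA-style attention step might look like the following; the names `conv1d_dilated` and `deca_like_attention`, the averaging fusion, and the dilation rates `(1, 2, 3)` are all assumptions made for the example:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d_dilated(signal, kernel, dilation=1):
    """'Same'-size 1D convolution over the channel axis with edge padding."""
    k = len(kernel)
    pad = dilation * (k // 2)
    padded = np.pad(signal, pad, mode="edge")
    offsets = dilation * (np.arange(k) - k // 2)
    out = np.empty_like(signal)
    for c in range(len(signal)):
        out[c] = np.dot(padded[c + pad + offsets], kernel)
    return out

def deca_like_attention(feature_map, kernel, dilations=(1, 2, 3)):
    """Hypothetical multi-scale channel attention (illustration only).

    feature_map: array of shape (C, H, W).
    The SAME kernel is applied at several dilation rates (weight sharing),
    and the per-scale responses are fused by averaging before the sigmoid.
    """
    # Global average pooling -> one descriptor per channel, shape (C,)
    z = feature_map.mean(axis=(1, 2))
    # Multi-scale encoding with a shared kernel, fused across scales
    fused = np.mean([conv1d_dilated(z, kernel, d) for d in dilations], axis=0)
    weights = sigmoid(fused)  # per-channel attention weights in (0, 1)
    return feature_map * weights[:, None, None]
```

The point of reusing one kernel at several dilation rates is that each rate covers a different channel neighborhood (a larger effective k) without adding parameters, which mirrors the weight-sharing and multi-scale claims in the abstract.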
Author | Wang, Junjie (Software Institute, Xinjiang University); Yu, Jiong (yujiong@xju.edu.cn, Software Institute, Xinjiang University); He, Zhu (Software Institute, Xinjiang University) |
Copyright | The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021 |
DOI | 10.1007/s10489-021-02496-y |
Discipline | Computer Science |
EISSN | 1573-7497 |
EndPage | 1375 |
GrantInformation | National Natural Science Foundation of China, grant 61862060 (https://doi.org/10.13039/501100001809) |
ISSN | 0924-669X |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 2 |
Keywords | Attention mechanism; Neural network; Fire detection; Object detection |
PageCount | 14 |
PublicationDate | 2022-01-01 |
PublicationPlace | New York |
PublicationSubtitle | The International Journal of Research on Intelligent Systems for Real Life Complex Problems |
PublicationTitle | Applied intelligence (Dordrecht, Netherlands) |
PublicationTitleAbbrev | Appl Intell |
PublicationYear | 2022 |
Publisher | Springer US; Springer Nature B.V |
StartPage | 1362 |
SubjectTerms | Artificial Intelligence; Artificial neural networks; Computer Science; Datasets; Deep learning; Efficiency; Experiments; Image detection; Machines; Manufacturing; Mechanical Engineering; Methods; Modules; Neural networks; Object recognition; Parameters; Processes; Sensors; Signal encoding
Title | DECA: a novel multi-scale efficient channel attention module for object detection in real-life fire images |
URI | https://link.springer.com/article/10.1007/s10489-021-02496-y https://www.proquest.com/docview/2622096484 |
Volume | 52 |
linkProvider | Springer Nature |
openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=DECA%3A+a+novel+multi-scale+efficient+channel+attention+module+for+object+detection+in+real-life+fire+images&rft.jtitle=Applied+intelligence+%28Dordrecht%2C+Netherlands%29&rft.au=Wang%2C+Junjie&rft.au=Yu%2C+Jiong&rft.au=He%2C+Zhu&rft.date=2022-01-01&rft.pub=Springer+US&rft.issn=0924-669X&rft.eissn=1573-7497&rft.volume=52&rft.issue=2&rft.spage=1362&rft.epage=1375&rft_id=info:doi/10.1007%2Fs10489-021-02496-y&rft.externalDocID=10_1007_s10489_021_02496_y |