Camouflaged Object Detection via Context-Aware Cross-Level Fusion
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 10, pp. 6981-6993
Main Authors: Geng Chen, Si-Jie Liu, Yu-Jia Sun, Ge-Peng Ji, Ya-Feng Wu, Tao Zhou
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.10.2022
Abstract: Camouflaged object detection (COD) aims to identify objects that conceal themselves in natural scenes. Accurate COD suffers from a number of challenges associated with low boundary contrast and large variation in object appearance, e.g., object size and shape. To address these challenges, we propose a novel Context-aware Cross-level Fusion Network (C²F-Net), which fuses context-aware cross-level features for accurately identifying camouflaged objects. Specifically, we compute informative attention coefficients from multi-level features with our Attention-induced Cross-level Fusion Module (ACFM), which further integrates the features under the guidance of the attention coefficients. We then propose a Dual-branch Global Context Module (DGCM) that refines the fused features into informative feature representations by exploiting rich global context information. Multiple ACFMs and DGCMs are integrated in a cascaded manner to generate a coarse prediction from the high-level features. The coarse prediction acts as an attention map to refine the low-level features before they are passed to our Camouflage Inference Module (CIM), which generates the final prediction. We perform extensive experiments on three widely used benchmark datasets and compare C²F-Net with state-of-the-art (SOTA) models. The results show that C²F-Net is an effective COD model that outperforms SOTA models remarkably. Further, an evaluation on polyp segmentation datasets demonstrates the promising potential of C²F-Net in COD downstream applications. Our code is publicly available at https://github.com/Ben57882/C2FNet-TSCVT
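Although this is a catalog record rather than the paper itself, the abstract is concrete enough to illustrate its two central operations in code. The PyTorch sketch below is a minimal, hypothetical rendering of (a) fusing two feature levels under attention coefficients computed from both levels (the role the abstract assigns to ACFM) and (b) using a coarse prediction as an attention map to refine low-level features before the final inference stage. All module names, channel widths, and layer choices here are assumptions, not the authors' design; their actual implementation is at https://github.com/Ben57882/C2FNet-TSCVT.

```python
# Hypothetical sketch of attention-guided cross-level fusion, loosely
# following the abstract's description of C2F-Net. Not the authors' code;
# see https://github.com/Ben57882/C2FNet-TSCVT for the real implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossLevelFusion(nn.Module):
    """Fuse a high-level and a low-level feature map under attention
    coefficients computed from both levels (the role the abstract
    assigns to ACFM). Layer choices are illustrative assumptions."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),  # per-pixel, per-channel attention coefficients
        )
        self.fuse = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Upsample the coarser (high-level) map to the low-level resolution.
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                             align_corners=False)
        a = self.attn(torch.cat([low, high], dim=1))
        # The attention coefficients decide, per location, how much of
        # each level contributes to the fused representation.
        return self.fuse(a * low + (1.0 - a) * high)


def refine_with_coarse(low: torch.Tensor,
                       coarse_logits: torch.Tensor) -> torch.Tensor:
    """Use a coarse prediction as an attention map over low-level
    features, as the abstract describes for the stage feeding CIM."""
    attn = torch.sigmoid(F.interpolate(coarse_logits, size=low.shape[2:],
                                       mode="bilinear", align_corners=False))
    return low * attn
```

As a toy usage, `CrossLevelFusion(64)(torch.randn(1, 64, 88, 88), torch.randn(1, 64, 44, 44))` returns a fused 64-channel map at the finer resolution. Per the abstract, the full pipeline additionally passes fused features through the DGCM for global context and cascades several ACFM/DGCM pairs; those internals are not described in this record and are omitted here.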
Authors and Affiliations:
– Geng Chen (Northwestern Polytechnical University, Xi'an, China; ORCID 0000-0001-8350-6581)
– Si-Jie Liu (Northwestern Polytechnical University, Xi'an, China)
– Yu-Jia Sun (School of Computer Science, Inner Mongolia University, Hohhot, China; ORCID 0000-0002-0101-2789)
– Ge-Peng Ji (Artificial Intelligence Institute, School of Computer Science, Wuhan University, Wuhan, China; ORCID 0000-0001-7092-2877)
– Ya-Feng Wu (Northwestern Polytechnical University, Xi'an, China)
– Tao Zhou (Key Laboratory of System Control and Information Processing, Ministry of Education, Shanghai, China)
CODEN: ITCTEM
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
DOI: 10.1109/TCSVT.2022.3178173
Discipline: Engineering
EISSN: 1558-2205
Genre: Original research
Funding:
– National Science Fund of China (Grant 62172228)
– Fundamental Research Funds for the Central Universities (Grant D5000220213)
– Open Project of the Key Laboratory of System Control and Information Processing, Ministry of Education, Shanghai Jiao Tong University (Grant Scip202102)
ISSN: 1051-8215
Peer Reviewed: Yes
License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
Subjects: Camouflaged object detection; Computational modeling; Context; Context-aware; Datasets; Deep learning; Feature extraction; Feature fusion; Image segmentation; Mathematical models; Modules; Object detection; Object recognition; Polyp segmentation; Task analysis
Online Access: https://ieeexplore.ieee.org/document/9782434 and https://www.proquest.com/docview/2721428034