TPRNet: camouflaged object detection via transformer-induced progressive refinement network

Bibliographic Details
Published in The Visual Computer, Vol. 39, No. 10, pp. 4593–4607
Main Authors Zhang, Qiao, Ge, Yanliang, Zhang, Cong, Bi, Hongbo
Format Journal Article
Language English
Published Berlin/Heidelberg: Springer Berlin Heidelberg, 01.10.2023
Springer Nature B.V.
Abstract Camouflaged object detection (COD) is a challenging task that aims to detect objects that closely resemble their surrounding environment. In this paper, we propose a transformer-induced progressive refinement network (TPRNet) to solve challenging COD tasks. Specifically, our network includes a Transformer-induced Progressive Refinement Module (TPRM) and a Semantic-Spatial Interaction Enhancement Module (SIEM). In the TPRM, high-level features with rich semantic information are integrated through transformers to serve as prior guidance; this guidance is sent to the refinement concurrency unit (RCU), and accurately positioned feature regions are obtained through a progressive refinement strategy. In the SIEM, we perform feature interaction between the accurately localized semantic features and low-level features to obtain rich fine-grained cues and strengthen the representation of boundary features. Extensive experiments on four widely used benchmark datasets (i.e., CAMO, CHAMELEON, COD10K, and NC4K) demonstrate that our TPRNet is an effective COD model that outperforms state-of-the-art models. The code is available at https://github.com/zhangqiao970914/TPRNet.
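The two-stage idea in the abstract — a transformer-fused semantic prior that progressively refines coarse-to-fine predictions (TPRM/RCU), followed by interaction between semantic and low-level features (SIEM) — can be illustrated with a minimal NumPy sketch. All function names, the gating-plus-residual update, and the nearest-neighbour upsampling below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def upsample2x(m):
    # Nearest-neighbour upsampling of a single-channel (H, W) map.
    return m.repeat(2, axis=0).repeat(2, axis=1)

def progressive_refine(features, prior):
    """Coarse-to-fine refinement: the current prediction gates each
    level's feature map (soft attention to likely object regions),
    then is updated with a residual connection and upsampled to
    match the next, finer level.

    features: list of (H, W) maps ordered coarse -> fine.
    prior:    coarsest prediction, e.g. from transformer-fused semantics.
    """
    pred = prior
    for feat in features:
        if pred.shape != feat.shape:
            pred = upsample2x(pred)
        pred = feat * sigmoid(pred) + pred  # gate, then residual update
    return pred

def semantic_spatial_interaction(semantic, low_level):
    # SIEM-style mutual gating (illustrative): each branch is modulated
    # by the other's sigmoid response, and the results are summed, so
    # semantic localization and low-level boundary detail reinforce
    # each other.
    return semantic * sigmoid(low_level) + low_level * sigmoid(semantic)

# Toy feature pyramid: 8x8 (semantic) up to 32x32 (fine-grained).
rng = np.random.default_rng(0)
feats = [rng.standard_normal((s, s)) for s in (8, 16, 32)]
prior = rng.standard_normal((8, 8))

out = progressive_refine(feats, prior)
print(out.shape)  # full-resolution prediction map
```

The sigmoid gating here stands in for the attention-like guidance described in the abstract: at each step the current prediction highlights likely object regions in the next, finer feature map before the prediction itself is updated.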
Author Ge, Yanliang
Bi, Hongbo
Zhang, Qiao
Zhang, Cong
Author_xml – sequence: 1
  givenname: Qiao
  surname: Zhang
  fullname: Zhang, Qiao
  organization: School of Electrical Information Engineering, Northeast Petroleum University
– sequence: 2
  givenname: Yanliang
  surname: Ge
  fullname: Ge, Yanliang
  organization: School of Electrical Information Engineering, Northeast Petroleum University
– sequence: 3
  givenname: Cong
  surname: Zhang
  fullname: Zhang, Cong
  email: congzhang98@126.com
  organization: School of Electrical Information Engineering, Northeast Petroleum University
– sequence: 4
  givenname: Hongbo
  orcidid: 0000-0003-2442-330X
  surname: Bi
  fullname: Bi, Hongbo
  email: bhbdq@126.com
  organization: School of Electrical Information Engineering, Northeast Petroleum University
ContentType Journal Article
Copyright The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2022
DOI 10.1007/s00371-022-02611-1
Discipline Engineering
Computer Science
EISSN 1432-2315
EndPage 4607
GrantInformation_xml – fundername: AnHui Province Key Laboratory of Infrared and Low-Temperature Plasma
  grantid: NO.IRKL2022KF07
ISSN 0178-2789
IsPeerReviewed true
IsScholarly true
Issue 10
Keywords Deep learning
Transformer
Progressive refinement
Camouflaged object detection
Language English
ORCID 0000-0003-2442-330X
PageCount 15
PublicationDate 2023-10-01
PublicationPlace Berlin/Heidelberg
PublicationSubtitle International Journal of Computer Graphics
PublicationTitle The Visual computer
PublicationTitleAbbrev Vis Comput
PublicationYear 2023
Publisher Springer Berlin Heidelberg
Springer Nature B.V
References_xml – reference: Dong, B., Zhuge, M., Wang, Y., Bi, H., Chen, G.: Towards accurate camouflaged object detection with mixture convolution and interactive fusion. arXiv preprint arXiv:2101.056871(2) (2021)
– reference: Ba, J.L., Kiros, J.R., Hinton, G.E.: Layer normalization. arXiv preprint arXiv:1607.06450 (2016)
– reference: Skurowski, P., Abdulameer, H., Błaszczyk, J., Depta, T., Kornacki, A., Kozieł, P.: Animal camouflage analysis: Chameleon database. Unpublished manuscript 2(6), 7 (2018)
– reference: Wang, W., Xie, E., Li, X., Fan, D.P., Song, K., Liang, D., Lu, T., Luo, P., Shao, L.: Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 568–578 (2021)
– reference: WeiJWangSHuangQF3\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$^3$$\end{document}net: fusion, feedback and focus for salient object detectionProc. AAAI Conf. Artif. Intell.202034. 1232112328
– reference: Fan, D.P., Ji, G.P., Zhou, T., Chen, G., Fu, H., Shen, J., Shao, L.: Pranet: Parallel reverse attention network for polyp segmentation. In: International conference on medical image computing and computer-assisted intervention, pp. 263–273. Springer (2020)
– reference: XiaoHRanZMabuSLiYLiLSaunet++: an automatic segmentation model of covid-19 lesion from ct slicesVis. Comput.202210.1007/s00371-022-02414-4
– reference: He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778 (2016)
– reference: LiuZHuangKTanTForeground object detection using top-down information based on em frameworkIEEE Trans. Image Process.201221942044217297241110.1109/TIP.2012.22004921373.94798
– reference: Mei, H., Ji, G.P., Wei, Z., Yang, X., Wei, X., Fan, D.P.: Camouflaged object segmentation with distraction mining. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8772–8781 (2021)
– reference: Li, A., Zhang, J., Lv, Y., Liu, B., Zhang, T., Dai, Y.: Uncertainty-aware joint salient object and camouflaged object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10071–10081 (2021)
– reference: Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 12179–12188 (2021)
– reference: Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: Pytorch: an imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019)
– reference: Fan, D.P., Cheng, M.M., Liu, Y., Li, T., Borji, A.: Structure-measure: a new way to evaluate foreground maps. In: Proceedings of the IEEE international conference on computer vision, pp. 4548–4557 (2017)
– reference: JiGPZhuLZhugeMFuKFast camouflaged object detection via edge-based reversible re-calibration networkPattern Recogn.202212310.1016/j.patcog.2021.108414
– reference: Fu, J., Liu, J., Tian, H., Li, Y., Bao, Y., Fang, Z., Lu, H.: Dual attention network for scene segmentation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3146–3154 (2019)
– reference: Wu, Z., Su, L., Huang, Q.: Cascaded partial decoder for fast and accurate salient object detection. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3907–3916 (2019)
– reference: Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017)
– reference: ZhugeMLuXGuoYCaiZChenSCubenet: X-shape connection for camouflaged object detectionPattern Recogn.202212710.1016/j.patcog.2022.108644
– reference: Wu, Z., Su, L., Huang, Q.: Stacked cross refinement network for edge-aware salient object detection. In: Proceedings of the IEEE/CVF international conference on computer vision, pp. 7264–7273 (2019)
– reference: WuYHGaoSHMeiJXuJFanDPZhangRGChengMMJcs: an explainable covid-19 diagnosis system by joint classification and segmentationIEEE Trans. Image Process.2021303113312610.1109/TIP.2021.3058783
– reference: LeXMeiJZhangHZhouBXiJA learning-based approach for surface defect detection using small image datasetsNeurocomputing202040811212010.1016/j.neucom.2019.09.107
– reference: FanDPJiGPChengMMShaoLConcealed object detectionIEEE Trans. Pattern Anal. Mach. Intell.202110.1109/TPAMI.2021.3085766
– reference: Fan, D.P., Ji, G.P., Sun, G., Cheng, M.M., Shen, J., Shao, L.: Camouflaged object detection. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2777–2787 (2020)
– reference: Margolin, R., Zelnik-Manor, L., Tal, A.: How to evaluate foreground maps? In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 248–255 (2014)
– reference: Liu, D., Cui, Y., Tan, W., Chen, Y.: Sg-net: Spatial granularity network for one-stage video instance segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9816–9825 (2021)
– reference: Zhao, J.X., Liu, J.J., Fan, D.P., Cao, Y., Yang, J., Cheng, M.M.: Egnet: Edge guidance network for salient object detection. In: Proceedings of the IEEE/CVF international conference on computer vision, pp. 8779–8788 (2019)
– reference: Lv, Y., Zhang, J., Dai, Y., Li, A., Liu, B., Barnes, N., Fan, D.P.: Simultaneously localize, segment and rank the camouflaged objects. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11591–11601 (2021)
– reference: Fan, D.P., Gong, C., Cao, Y., Ren, B., Cheng, M.M., Borji, A.: Enhanced-alignment measure for binary foreground map evaluation. arXiv preprint arXiv:1805.10421 (2018)
– reference: WangDHuGLyuCFrnet: an end-to-end feature refinement neural network for medical image segmentationVis. Comput.20213751101111210.1007/s00371-020-01855-z
– reference: Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7794–7803 (2018)
– reference: Yuan, L., Chen, Y., Wang, T., Yu, W., Shi, Y., Jiang, Z.H., Tay, F.E., Feng, J., Yan, S.: Tokens-to-token vit: Training vision transformers from scratch on imagenet. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 558–567 (2021)
– reference: Pang, Y., Zhao, X., Zhang, L., Lu, H.: Multi-scale interactive network for salient object detection. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9413–9422 (2020)
– reference: Sengottuvelan, P., Wahi, A., Shanmugam, A.: Performance of decamouflaging through exploratory image analysis. In: 2008 First International Conference on Emerging Trends in Engineering and Technology, pp. 6–10. IEEE (2008)
– reference: Cui, Y., Cao, Z., Xie, Y., Jiang, X., Tao, F., Chen, Y.V., Li, L., Liu, D.: Dg-labeler and dgl-mots dataset: Boost the autonomous driving perception. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 58–67 (2022)
– reference: Zhai, Q., Li, X., Yang, F., Chen, C., Cheng, H., Fan, D.P.: Mutual graph learning for camouflaged object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12997–13007 (2021)
– reference: Zhang, X., Wang, X., Gu, C.: Online multi-object tracking with pedestrian re-identification and occlusion processing. Vis. Comput. 37(5), 1089–1099 (2021). doi: 10.1007/s00371-020-01854-0
– reference: Youwei, P., Xiaoqi, Z., Tian-Zhu, X., Lihe, Z., Huchuan, L.: Zoom in and out: A mixed-scale triplet network for camouflaged object detection. arXiv preprint arXiv:2203.02688 (2022)
– reference: Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
– reference: Gao, S.H., Cheng, M.M., Zhao, K., Zhang, X.Y., Yang, M.H., Torr, P.: Res2net: a new multi-scale backbone architecture. IEEE Trans. Pattern Anal. Mach. Intell. 43(2), 652–662 (2019). doi: 10.1109/TPAMI.2019.2938758
– reference: Bi, H., Zhang, C., Wang, K., Tong, J., Zheng, F.: Rethinking camouflaged object detection: models and datasets. IEEE Trans. Circuits Syst. Video Technol. (2021). doi: 10.1109/TCSVT.2021.3124952
– reference: Pan, Y., Chen, Y., Fu, Q., Zhang, P., Xu, X.: Study on the camouflaged target detection method based on 3d convexity. Mod. Appl. Sci. 5(4), 152 (2011). doi: 10.5539/mas.v5n4p152
– reference: Yan, J., Le, T.N., Nguyen, K.D., Tran, M.T., Do, T.T., Nguyen, T.V.: Mirrornet: bio-inspired camouflaged object segmentation. IEEE Access 9, 43290–43300 (2021). doi: 10.1109/ACCESS.2021.3064443
– reference: Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10012–10022 (2021)
– reference: Zhang, Y., Han, S., Zhang, Z., Wang, J., Bi, H.: Cf-gan: cross-domain feature fusion generative adversarial network for text-to-image synthesis. Vis. Comput. (2022). doi: 10.1007/s00371-022-02404-6
– reference: Perazzi, F., Krähenbühl, P., Pritch, Y., Hornung, A.: Saliency filters: Contrast based filtering for salient region detection. In: 2012 IEEE conference on computer vision and pattern recognition, pp. 733–740. IEEE (2012)
– reference: Wang, K., Bi, H., Zhang, Y., Zhang, C., Liu, Z., Zheng, S.: D2c-net: a dual-branch, dual-guidance and cross-refine network for camouflaged object detection. IEEE Trans. Ind. Electron. 69, 5364 (2021). doi: 10.1109/TIE.2021.3078379
– reference: Le, T.N., Nguyen, T.V., Nie, Z., Tran, M.T., Sugimoto, A.: Anabranch network for camouflaged object segmentation. Comput. Vis. Image Underst. 184, 45–56 (2019). doi: 10.1016/j.cviu.2019.04.006
– reference: Amit, S.N.K.B., Shiraishi, S., Inoshita, T., Aoki, Y.: Analysis of satellite images for disaster detection. In: 2016 IEEE International geoscience and remote sensing symposium (IGARSS), pp. 5189–5192. IEEE (2016)
– reference: Sun, Y., Chen, G., Zhou, T., Zhang, Y., Liu, N.: Context-aware cross-level fusion network for camouflaged object detection. arXiv preprint arXiv:2105.12555 (2021)
– reference: Liu, D., Cui, Y., Chen, Y., Zhang, J., Fan, B.: Video object detection for autonomous driving: motion-aid feature calibration. Neurocomputing 409, 1–11 (2020). doi: 10.1016/j.neucom.2020.05.027
– reference: Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7132–7141 (2018)
– reference: Yang, F., Zhai, Q., Li, X., Huang, R., Luo, A., Cheng, H., Fan, D.P.: Uncertainty-guided transformer reasoning for camouflaged object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4146–4155 (2021)
– reference: Cui, Y., Yan, L., Cao, Z., Liu, D.: Tf-blender: Temporal feature blender for video object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8138–8147 (2021)
– reference: Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)
– reference: Yin, J., Han, Y., Hou, W., Li, J.: Detection of the mobile object with camouflage color under dynamic background based on optical flow. Procedia Eng. 15, 2201–2205 (2011). doi: 10.1016/j.proeng.2011.08.412
– reference: Wang, X., Wang, W., Bi, H., Wang, K.: Reverse collaborative fusion model for co-saliency detection. The Visual Computer pp. 1–11 (2021)
– reference: Fan, D.P., Zhou, T., Ji, G.P., Zhou, Y., Chen, G., Fu, H., Shen, J., Shao, L.: Inf-net: automatic covid-19 lung infection segmentation from ct images. IEEE Trans. Med. Imaging 39(8), 2626–2637 (2020). doi: 10.1109/TMI.2020.2996645
– reference: Bi, H., Wang, K., Lu, D., Wu, C., Wang, W., Yang, L.: C2net: a complementary co-saliency detection network. Vis. Comput. 37(5), 911–923 (2021). doi: 10.1007/s00371-020-01842-4
Snippet Camouflaged object detection (COD) is a challenging task which aims to detect objects similar to the surrounding environment. In this paper, we propose a...
SourceID proquest
crossref
springer
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 4593
SubjectTerms Artificial Intelligence
Computer Graphics
Computer Science
Deep learning
Image Processing and Computer Vision
Methods
Mining
Modules
Neural networks
Object recognition
Original Article
Semantics
Transformers
Title TPRNet: camouflaged object detection via transformer-induced progressive refinement network
URI https://link.springer.com/article/10.1007/s00371-022-02611-1
https://www.proquest.com/docview/2918044254
Volume 39