Structure-Measure: A New Way to Evaluate Foreground Maps
Published in | International journal of computer vision Vol. 129; no. 9; pp. 2622 - 2638 |
Main Authors | Cheng, Ming-Ming; Fan, Deng-Ping |
Format | Journal Article |
Language | English |
Published | New York: Springer US, 01.09.2021 (Springer; Springer Nature B.V) |
Abstract | Foreground map evaluation is crucial for gauging the progress of object segmentation algorithms, in particular in the field of salient object detection, where the purpose is to accurately detect and segment the most salient object in a scene. Several measures (e.g., area-under-the-curve, F1-measure, average precision, etc.) have been used to evaluate the similarity between a foreground map and a ground-truth map. The existing measures are based on pixel-wise errors and often ignore structural similarities. Behavioral vision studies, however, have shown that the human visual system is highly sensitive to structures in scenes. Here, we propose a novel, efficient (0.005 s per image), and easy-to-calculate measure known as S-measure (structural measure) to evaluate foreground maps. Our new measure simultaneously evaluates region-aware and object-aware structural similarity between a foreground map and a ground-truth map. We demonstrate the superiority of our measure over existing ones using 4 meta-measures on 5 widely used benchmark datasets. Furthermore, we conduct a behavioral judgment study over a new database. Data from 45 subjects shows that on average they preferred the saliency maps chosen by our measure over the saliency maps chosen by state-of-the-art measures. Our experimental results offer new insights into foreground map evaluation, where current measures fail to truly examine the strengths and weaknesses of models. Code: https://github.com/DengPingFan/S-measure. |
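The abstract states only the high-level recipe: the S-measure combines an object-aware term and a region-aware term into a single score, with the full formulation and reference implementation in the linked repository. As a rough orientation, the sketch below shows one way such a combination can be wired up in NumPy. It is a simplified, hypothetical reading of the idea, assuming the commonly used weighting S = α·S_object + (1 − α)·S_region with α = 0.5, a single-window SSIM per block for the region term, and a deliberately reduced object term; it is not the official implementation and its scores will not match it numerically.

```python
import numpy as np


def _ssim(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two equally sized float blocks in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))


def s_measure_sketch(pred, gt, alpha=0.5):
    """Illustrative S = alpha * S_object + (1 - alpha) * S_region.

    pred: HxW float saliency map in [0, 1]; gt: HxW binary ground-truth mask.
    Both terms below are simplified stand-ins for the paper's formulation.
    """
    pred = pred.astype(float)
    gt = gt.astype(bool)

    # Object-aware term (simplified): mean prediction inside the GT foreground
    # plus mean "background-ness" outside it, weighted by the foreground ratio.
    mu = gt.mean()
    fg = pred[gt].mean() if gt.any() else 0.0
    bg = (1.0 - pred[~gt]).mean() if (~gt).any() else 0.0
    s_object = mu * fg + (1.0 - mu) * bg

    # Region-aware term: split both maps into 4 blocks at the GT centroid and
    # average a per-block SSIM, weighting each block by its share of the image.
    if gt.any():
        ys, xs = np.nonzero(gt)
        cy, cx = int(ys.mean()), int(xs.mean())
    else:
        cy, cx = gt.shape[0] // 2, gt.shape[1] // 2
    s_region = 0.0
    for rows in (slice(0, cy), slice(cy, None)):
        for cols in (slice(0, cx), slice(cx, None)):
            block_gt = gt[rows, cols]
            if block_gt.size == 0:  # degenerate split: centroid lies on a border
                continue
            weight = block_gt.size / gt.size
            s_region += weight * _ssim(pred[rows, cols], block_gt.astype(float))

    return alpha * s_object + (1.0 - alpha) * s_region
```

The real measure refines both terms (for example, the object-aware similarity also accounts for the dispersion of prediction values), so this sketch is intended purely as a reading aid for the abstract, not as a substitute for the code in the repository.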
Audience | Academic |
Author | Cheng, Ming-Ming; Fan, Deng-Ping |
Author_xml | – sequence: 1 givenname: Ming-Ming orcidid: 0000-0001-5550-8758 surname: Cheng fullname: Cheng, Ming-Ming email: cmm@nankai.edu.cn organization: College of Computer Science, Nankai University – sequence: 2 givenname: Deng-Ping orcidid: 0000-0002-5245-7518 surname: Fan fullname: Fan, Deng-Ping organization: College of Computer Science, Nankai University |
ContentType | Journal Article |
Copyright | The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021; COPYRIGHT 2021 Springer. |
DOI | 10.1007/s11263-021-01490-8 |
Discipline | Applied Sciences; Computer Science |
EISSN | 1573-1405 |
EndPage | 2638 |
GrantInformation_xml | Major Project for New Generation of AI (grant 2018AAA0100400); National Natural Science Foundation of China (grant 61922046, http://dx.doi.org/10.13039/501100001809) |
ISSN | 0920-5691 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 9 |
Keywords | Evaluation; Foreground maps; Structure measure; Salient object detection; S-measure |
Language | English |
ORCID | 0000-0002-5245-7518 0000-0001-5550-8758 |
PageCount | 17 |
PublicationCentury | 2000 |
PublicationDate | 2021-09-01
PublicationDecade | 2020 |
PublicationPlace | New York |
PublicationTitle | International journal of computer vision |
PublicationTitleAbbrev | Int J Comput Vis |
PublicationYear | 2021 |
Publisher | Springer US; Springer; Springer Nature B.V |
StartPage | 2622 |
SubjectTerms | Algorithms Analysis Artificial Intelligence Computer Imaging Computer Science Evaluation Gaging Image Processing and Computer Vision Image segmentation Object recognition Pattern Recognition Pattern Recognition and Graphics Salience Similarity Vision |
Title | Structure-Measure: A New Way to Evaluate Foreground Maps |
URI | https://link.springer.com/article/10.1007/s11263-021-01490-8 https://www.proquest.com/docview/2556147528 |
Volume | 129 |