Group attention retention network for co-salient object detection
Published in | Machine Vision and Applications, Vol. 34, No. 6, Art. 107
Main Authors | Liu, Jing; Wang, Jiaxiang; Fan, Zhiwei; Yuan, Min; Wang, Weikang; Yu, Jiexiao
Format | Journal Article |
Language | English |
Published | Berlin/Heidelberg: Springer Berlin Heidelberg; Springer Nature B.V., 01.11.2023
Subjects | Co-salient object detection; Group attention; Attention retention; Vision transformer
Online Access | https://link.springer.com/article/10.1007/s00138-023-01462-7
Abstract | Co-salient object detection (Co-SOD) aims to discover the common, salient objects in a group of images. With the development of convolutional neural networks, the performance of Co-SOD methods has improved significantly. However, some models cannot optimally construct collaborative relationships across images, and they lack an effective way to retain collaborative features during top-down decoding. In this paper, we propose a novel group attention retention network (GARNet), which captures strong collaborative features and retains them. First, a group attention module is designed to construct the inter-image relationships. Second, an attention retention module and a spatial attention module are designed, respectively, to keep the inter-image relationships from being diluted and to filter out cluttered context during feature fusion. Finally, considering the intra-group consistency and inter-group separability of images, an embedding loss is additionally designed to discriminate between real collaborative objects and distracting objects. Experiments on four datasets (iCoSeg, CoSal2015, CoSOD3k, and CoCA) show that GARNet outperforms previous state-of-the-art methods. The source code is available at https://github.com/TJUMMG/GARNet.
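The abstract's core mechanism, attention computed jointly across a group of images, can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration of one way such cross-image ("group") attention can be computed, letting every spatial position of every image in the group attend to all positions of all other images. It is not the authors' GARNet implementation (see the repository linked above for that); the class name, projection widths, and tensor shapes are assumptions for illustration.

```python
# Illustrative sketch of cross-image "group attention" -- NOT the authors'
# GARNet code (see https://github.com/TJUMMG/GARNet for the original).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupAttentionSketch(nn.Module):
    """Mixes features across all images of a group so that each spatial
    location can attend to every location of every image in the group."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convs project features to query/key/value spaces (widths assumed).
        self.query = nn.Conv2d(channels, channels // 4, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 4, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, C, H, W), the features of the N images of one group.
        n, c, h, w = feats.shape
        q = self.query(feats).flatten(2)   # (N, C/4, HW)
        k = self.key(feats).flatten(2)     # (N, C/4, HW)
        v = self.value(feats).flatten(2)   # (N, C,   HW)

        # Fold the group dimension in, so attention spans all N*H*W positions.
        q = q.permute(0, 2, 1).reshape(n * h * w, -1)  # (NHW, C/4)
        k = k.permute(0, 2, 1).reshape(n * h * w, -1)  # (NHW, C/4)
        v = v.permute(0, 2, 1).reshape(n * h * w, -1)  # (NHW, C)

        # Scaled dot-product affinity across every pair of positions.
        attn = F.softmax(q @ k.t() / k.shape[-1] ** 0.5, dim=-1)  # (NHW, NHW)
        out = (attn @ v).reshape(n, h * w, c).permute(0, 2, 1)    # (N, C, HW)
        return feats + out.reshape(n, c, h, w)  # residual connection

# Example: a group of 5 feature maps, 64 channels, 14x14 resolution.
# module = GroupAttentionSketch(64)
# fused = module(torch.randn(5, 64, 14, 14))
```

Note that the (NHW x NHW) affinity matrix grows quadratically with group size and resolution, which is why modules of this kind are typically applied to low-resolution, high-level encoder features, and why, as the abstract argues, the decoder then needs a retention mechanism to keep the collaborative signal from being diluted on the way back up.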
ArticleNumber | 107 |
Author | Liu, Jing; Wang, Jiaxiang; Fan, Zhiwei; Yuan, Min; Wang, Weikang (wwk_19970307@tju.edu.cn); Yu, Jiexiao
Affiliations | All authors: School of Electrical and Information Engineering, Tianjin University; Jing Liu also with the Key Laboratory of Artificial Intelligence, Ministry of Education
ContentType | Journal Article |
Copyright | The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. |
DOI | 10.1007/s00138-023-01462-7 |
Discipline | Applied Sciences; Engineering; Computer Science
EISSN | 1432-1769 |
GrantInformation | National Science Foundation of China, grant 6170134
ISSN | 0932-8092 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 6 |
Keywords | Vision transformer; Attention retention; Co-salient object detection; Group attention
PublicationDate | 2023-11-01 |
PublicationPlace | Berlin/Heidelberg |
PublicationTitle | Machine vision and applications |
PublicationTitleAbbrev | Machine Vision and Applications |
PublicationYear | 2023 |
Publisher | Springer Berlin Heidelberg Springer Nature B.V |
StartPage | 107 |
SubjectTerms | Artificial neural networks; Collaboration; Communications Engineering; Computer Science; Decoding; Garnets; Image Processing and Computer Vision; Modules; Networks; Object recognition; Original Paper; Pattern Recognition; Retention; Salience; Source code; Vision systems
URI | https://link.springer.com/article/10.1007/s00138-023-01462-7 https://www.proquest.com/docview/2866638875 |
Volume | 34 |