Edge-Guided Non-Local Fully Convolutional Network for Salient Object Detection
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 31, No. 2, pp. 582-593 |
Main Authors | Tu, Zhengzheng; Ma, Yan; Li, Chenglong; Tang, Jin; Luo, Bin |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.02.2021 |
Subjects | Salient object detection; fully convolutional neural network; edge guidance; non-local features |
Abstract | Fully Convolutional Neural Network (FCN) has been widely applied to salient object detection recently by virtue of high-level semantic feature extraction, but existing FCN-based methods still suffer from continuous striding and pooling operations, which lead to loss of spatial structure and blurred edges. To maintain the clear edge structure of salient objects, we propose a novel Edge-guided Non-local FCN (ENFNet) to perform edge-guided feature learning for accurate salient object detection. Specifically, we extract hierarchical global and local information in the FCN to incorporate non-local features for effective feature representations. To preserve good boundaries of salient objects, we propose a guidance block to embed edge prior knowledge into hierarchical feature maps. The guidance block performs not only feature-wise manipulation but also spatial-wise transformation for effective edge embeddings. Our model is trained on the MSRA-B dataset and tested on five popular benchmark datasets. Compared with state-of-the-art methods, the proposed method performs well on all five datasets. |
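The abstract names two architectural ingredients: a non-local operation over hierarchical feature maps, and a guidance block that embeds an edge prior via feature-wise and spatial-wise modulation. Below is a minimal NumPy sketch of both ideas, not the authors' implementation: the learned 1x1 projections of a full non-local block are omitted, and the names `non_local_block`, `edge_guidance_block`, `gamma`, and `beta` are hypothetical stand-ins for the paper's learned transforms.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(feat):
    # feat: (C, H, W). Simplified non-local operation: every spatial
    # position aggregates features from all positions, weighted by
    # dot-product similarity, then a residual connection is added.
    # (A real non-local block would also apply learned projections.)
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)           # (C, N) flattened positions
    attn = softmax(x.T @ x, axis=-1)     # (N, N) pairwise attention
    out = x @ attn.T                     # aggregate over all positions
    return feat + out.reshape(C, H, W)   # residual connection

def edge_guidance_block(feat, edge_map, gamma, beta):
    # Hypothetical guidance block: per-channel scaling (feature-wise
    # manipulation) plus an additive shift driven by the edge prior
    # (spatial-wise transformation). The actual learned transforms are
    # not specified in the abstract.
    scaled = feat * gamma[:, None, None]            # feature-wise
    shifted = scaled + beta * edge_map[None, :, :]  # spatial-wise
    return shifted
```

With uniform input features the attention weights are uniform, so `non_local_block` simply adds the per-channel mean back onto each position; with a non-trivial feature map, edge-adjacent positions that correlate strongly reinforce each other, which is the intuition behind using non-local context for sharper saliency maps.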
Author | Tu, Zhengzheng (zhengzhengahu@163.com); Ma, Yan (m17856174397@163.com); Li, Chenglong (lcl1314@foxmail.com); Tang, Jin (tangjin@ahu.edu.cn); Luo, Bin. All authors are with the Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Computer Science and Technology, Anhui University, Hefei, China |
CODEN | ITCTEM |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021 |
DOI | 10.1109/TCSVT.2020.2980853 |
Discipline | Engineering |
EISSN | 1558-2205 |
EndPage | 593 |
Genre | orig-research |
GrantInformation | Natural Science Foundation of Anhui Province (1808085QF187, 1908085QF264); National Natural Science Foundation of China (61602006, 61702002, 61976003, 61976002); Natural Science Foundation of Anhui Higher Education Institution of China (KJ2019A0026); NSFC Key Projects in International (Regional) Cooperation and Exchanges (61860206004); Open Fund for Discipline Construction, Institute of Physical Science and Information Technology, Anhui University |
ISSN | 1051-8215 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 2 |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0001-8375-3590 0000-0002-7233-2739 0000-0001-5948-5055 |
PageCount | 12 |
PublicationDate | 2021-02-01 |
PublicationPlace | New York |
PublicationTitle | IEEE transactions on circuits and systems for video technology |
PublicationTitleAbbrev | TCSVT |
PublicationYear | 2021 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 582 |
SubjectTerms | Artificial neural networks; Context modeling; Convolution; Datasets; Deep learning; edge guidance; Feature extraction; Feature maps; fully convolutional neural network; Image edge detection; Machine learning; non-local features; Object detection; Object recognition; Salience; Saliency detection; Salient object detection |
Title | Edge-Guided Non-Local Fully Convolutional Network for Salient Object Detection |
URI | https://ieeexplore.ieee.org/document/9036909 https://www.proquest.com/docview/2486593861 |
Volume | 31 |