Boundary-enhanced dual-stream network for semantic segmentation of high-resolution remote sensing images
Saved in:
Published in | GIScience and remote sensing, Vol. 61, No. 1 |
Main Authors | Li, Xinghua; Xie, Linglin; Wang, Caifeng; Miao, Jianhao; Shen, Huanfeng; Zhang, Liangpei |
Format | Journal Article |
Language | English |
Published | Taylor & Francis (Taylor & Francis Group), 31.12.2024 |
Subjects | Semantic segmentation; boundary blur; CNN; HRSIs; intra-class inconsistency |
Online Access | Get full text |
Abstract | Deep convolutional neural networks (DCNNs) have been successfully applied to semantic segmentation of high-resolution remote sensing images (HRSIs). However, the task still suffers from intra-class inconsistency and boundary blur, caused by high intra-class heterogeneity and inter-class homogeneity, considerable scale variance, and the spatial information loss of conventional DCNN-based methods. Therefore, a novel boundary-enhanced dual-stream network (BEDSN) is proposed, in which an edge detection branch stream (EDBS) with a composite loss function compensates for the boundary loss of the semantic segmentation branch stream (SSBS). EDBS and SSBS are integrated through a highly coupled encoder and feature extractor. A lightweight multilevel information fusion module, guided by a channel attention mechanism, reuses intermediate boundary information effectively. To aggregate multiscale contextual information, SSBS is enhanced with a multiscale feature extraction module and a hybrid atrous convolution module. Extensive experiments were conducted on the ISPRS Vaihingen and Potsdam datasets. Results show that BEDSN achieves significant improvements in intra-class consistency and boundary refinement. Compared with 11 state-of-the-art methods, BEDSN delivers higher performance in both quantitative and visual assessments, with low model complexity. The code will be available at https://github.com/lixinghua5540/BEDSN. |
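The abstract describes a composite loss that adds an edge-detection term to the usual segmentation term. The following is an illustrative sketch of that general idea, not the paper's exact formulation: the weighting `lam` and the choice of cross-entropy plus binary cross-entropy are assumptions for demonstration.

```python
# Hedged sketch of a composite loss combining a segmentation term with a
# boundary (edge-map) term, in the spirit of the EDBS/SSBS design described
# in the abstract. The weight `lam` is a hypothetical hyperparameter.
import numpy as np

def cross_entropy(probs, labels, eps=1e-12):
    """Mean pixel-wise cross-entropy; probs has shape (N, C), labels (N,)."""
    n = labels.shape[0]
    return float(-np.mean(np.log(probs[np.arange(n), labels] + eps)))

def binary_cross_entropy(p, t, eps=1e-12):
    """Mean BCE for a predicted edge map; p and t are flat arrays in [0, 1]."""
    return float(-np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps)))

def composite_loss(seg_probs, seg_labels, edge_probs, edge_targets, lam=0.5):
    """Weighted sum of the segmentation and boundary losses."""
    return (cross_entropy(seg_probs, seg_labels)
            + lam * binary_cross_entropy(edge_probs, edge_targets))
```

In training, the boundary term penalizes blurred object contours even when the per-pixel class loss is already low, which is the motivation the abstract gives for the dual-stream design.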
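The abstract also mentions a multilevel fusion module guided by a channel attention mechanism. Below is a minimal numpy sketch of squeeze-and-excitation-style channel attention, the generic mechanism of that name; the reduction ratio, random weights, and shapes are illustrative assumptions, not BEDSN's actual parameters.

```python
# Hedged sketch of channel attention: globally pool each channel, pass the
# channel descriptor through a small bottleneck, and use sigmoid gates to
# reweight the feature map channel by channel. Weights here are random.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """Reweight channels of a (C, H, W) feature map.

    w1: (C//r, C) squeeze projection; w2: (C, C//r) excitation projection.
    """
    c = features.shape[0]
    squeezed = features.reshape(c, -1).mean(axis=1)   # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)           # ReLU bottleneck
    gates = sigmoid(w2 @ hidden)                      # per-channel gates in (0, 1)
    return features * gates[:, None, None]

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8)) * 0.1                # reduction ratio r = 4 (assumed)
w2 = rng.standard_normal((8, 2)) * 0.1
out = channel_attention(feats, w1, w2)
```

Because every gate lies strictly in (0, 1), the module can only attenuate channels, letting the network emphasize boundary-relevant feature maps during fusion.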
Author | Li, Xinghua (Wuhan University); Xie, Linglin (Ministry of Natural Resources, xll@img.net); Wang, Caifeng (Wuhan University); Miao, Jianhao (Wuhan University); Shen, Huanfeng (Collaborative Innovation Center of Geospatial Technology); Zhang, Liangpei (Wuhan University) |
Copyright | 2024 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. 2024 |
DOI | 10.1080/15481603.2024.2356355 |
EISSN | 1943-7226 |
Genre | Method |
GrantInformation | Hubei Luojia Laboratory (grant 220100055); National Natural Science Foundation of China (grant 42171302); Open Fund of Key Laboratory of Natural Resources Monitoring and Supervision in Southern Hilly Region, Ministry of Natural Resources (grant NRMSSHR2022Z03) |
ISSN | 1548-1603 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 1 |
License | open-access: http://creativecommons.org/licenses/by-nc/4.0/: This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial License (http://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. The terms on which this article has been published allow the posting of the Accepted Manuscript in a repository by the author(s) or with their consent. |
OpenAccessLink | https://doaj.org/article/0faaf8915b014e4da793f8a45be5b26e |
PublicationDate | 2024-12-31 |
PublicationTitle | GIScience and remote sensing |
PublicationYear | 2024 |
Publisher | Taylor & Francis Taylor & Francis Group |
SubjectTerms | boundary blur; CNN; HRSIs; intra-class inconsistency; Semantic segmentation |
Title | Boundary-enhanced dual-stream network for semantic segmentation of high-resolution remote sensing images |
URI | https://www.tandfonline.com/doi/abs/10.1080/15481603.2024.2356355 https://doaj.org/article/0faaf8915b014e4da793f8a45be5b26e |
Volume | 61 |