Classification with an edge: Improving semantic image segmentation with boundary detection
Published in: ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 135, pp. 158–172
Main Authors: D. Marmanis, K. Schindler, J.D. Wegner, S. Galliani, M. Datcu, U. Stilla
Format: Journal Article
Language: English
Publisher: Elsevier B.V.
Published: 01.01.2018
Abstract: We present an end-to-end trainable deep convolutional neural network (DCNN) for semantic segmentation with built-in awareness of semantically meaningful boundaries. Semantic segmentation is a fundamental remote sensing task, and most state-of-the-art methods rely on DCNNs as their workhorse. A major reason for their success is that deep networks learn to accumulate contextual information over very large receptive fields. However, this success comes at a cost, since the associated loss of effective spatial resolution washes out high-frequency details and leads to blurry object boundaries. Here, we propose to counter this effect by combining semantic segmentation with semantically informed edge detection, thus making class boundaries explicit in the model. First, we construct a comparatively simple, memory-efficient model by adding boundary detection to the SegNet encoder-decoder architecture. Second, we also include boundary detection in FCN-type models and set up a high-end classifier ensemble. We show that boundary detection significantly improves semantic segmentation with CNNs in an end-to-end training scheme. Our best model achieves >90% overall accuracy on the ISPRS Vaihingen benchmark.
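The joint objective described in the abstract (semantic segmentation combined with semantically informed edge detection, trained end-to-end) can be sketched as a two-head multi-task loss: per-pixel cross-entropy on the class map plus binary cross-entropy on a class-boundary map. The function names, the 4-neighbour boundary definition, and the weight `w_edge` below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def softmax(logits, axis=-1):
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def boundary_map(seg_labels):
    """Derive a binary class-boundary map from a label image:
    a pixel is a boundary pixel if any 4-neighbour has a different class."""
    b = np.zeros(seg_labels.shape, dtype=bool)
    b[:-1, :] |= seg_labels[:-1, :] != seg_labels[1:, :]
    b[1:, :]  |= seg_labels[1:, :] != seg_labels[:-1, :]
    b[:, :-1] |= seg_labels[:, :-1] != seg_labels[:, 1:]
    b[:, 1:]  |= seg_labels[:, 1:] != seg_labels[:, :-1]
    return b.astype(float)

def joint_loss(seg_logits, seg_labels, edge_logits, edge_labels, w_edge=0.5):
    """Multi-task loss: cross-entropy over the semantic classes plus
    weighted binary cross-entropy over the class-boundary map."""
    # seg_logits: (H, W, C) raw class scores; seg_labels: (H, W) class ids
    probs = softmax(seg_logits)
    h, w = seg_labels.shape
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], seg_labels]
    ce = -np.log(picked + 1e-12).mean()
    # edge_logits: (H, W) raw boundary scores; edge_labels: (H, W) in {0, 1}
    p = 1.0 / (1.0 + np.exp(-edge_logits))
    bce = -(edge_labels * np.log(p + 1e-12)
            + (1 - edge_labels) * np.log(1 - p + 1e-12)).mean()
    return ce + w_edge * bce
```

In an end-to-end setting, both heads would share the encoder, so minimizing this combined objective pushes the shared features to preserve the high-frequency boundary detail that plain segmentation losses tend to wash out.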
Authors:
- D. Marmanis (dimitrios.marmanis@dlr.de), DLR-IMF Department, German Aerospace Center, Oberpfaffenhofen, Germany
- K. Schindler (konrad.schindler@geod.baug.ethz.ch), Photogrammetry and Remote Sensing, ETH Zurich, Switzerland
- J.D. Wegner (jan.wegner@geod.baug.ethz.ch), Photogrammetry and Remote Sensing, ETH Zurich, Switzerland
- S. Galliani (silvano.galliani@geod.baug.ethz.ch), Photogrammetry and Remote Sensing, ETH Zurich, Switzerland
- M. Datcu (mihai.datcu@dlr.de), DLR-IMF Department, German Aerospace Center, Oberpfaffenhofen, Germany
- U. Stilla (stilla@tum.de), Photogrammetry and Remote Sensing, TU München, Germany
Copyright: 2017 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS)
DOI: 10.1016/j.isprsjprs.2017.11.009
Discipline: Geography; Engineering
EISSN: 1872-8235
ISSN: 0924-2716
References_xml | – start-page: 473 year: 2016 end-page: 480 ident: b0145 article-title: Semantic segmentation of aerial images with an ensemble of cnns publication-title: ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. – reference: Hinton, G., Vinyals, O., Dean, J., 2015. Distilling the knowledge in a neural network. Available from: arXiv preprint arXiv: – reference: Yu, F., Koltun, V., 2016. Multi-scale context aggregation by dilated convolutions. In: International Conference on Learning Representations (ICLR). – reference: Paisitkriangkrai, S., Sherrah, J., Janney, P., Hengel, V.D., et al., 2015. Effective semantic pixel labelling with convolutional networks and conditional random fields. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 36–43. – reference: Noh, H., Hong, S., Han, B., 2015. Learning deconvolution network for semantic segmentation. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1520–1528. – reference: Pinheiro, P.O., Lin, T.Y., Collobert, R., Dollár, P., 2016. Learning to refine object segments. In: European Conference on Computer Vision, Springer, pp. 75–91. – reference: Audebert, N., Le Saux, B., Lefèvre, S., 2016. Semantic segmentation of earth observation data using multimodal and multi-scale deep networks. In: Asian Conference on Computer Vision, Springer, pp. 180–196. – volume: 53 start-page: 280 year: 2015 end-page: 295 ident: b0220 article-title: Features, color spaces, and boosting: new insights on semantic classification of remote sensing images publication-title: IEEE Trans. Geosci. Remote Sens. – reference: Malmgren-Hansen, D., Nobel-J, M., et al., 2015. Convolutional neural networks for sar image segmentation. In: 2015 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), IEEE, pp. 231–236. – reference: Zeiler, M.D., Fergus, R., 2014. Visualizing and understanding convolutional networks. 
In: European Conference on Computer Vision, Springer, pp. 818–833. – reference: Chen, L., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L., 2016a. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. CoRR abs/1606.00915. Available from: arXiv: – reference: Gal, Y., Ghahramani, Z., 2016. Dropout as a bayesian approximation: representing model uncertainty in deep learning. In: International Conference on Machine Learning, pp. 1050–1059. – reference: Sherrah, J., 2016. Fully convolutional networks for dense semantic labelling of high-resolution aerial imagery. Available from: arXiv preprint arXiv: – reference: Yang, J., Price, B., Cohen, S., Lee, H., Yang, M.H., 2016. Object contour detection with a fully convolutional encoder-decoder network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 193–202. – volume: 48 start-page: 3747 year: 2010 end-page: 3762 ident: b0050 article-title: Morphological attribute profiles for the analysis of very high resolution images publication-title: IEEE Trans. Geosci. Remote Sens. – volume: 8 start-page: 329 year: 2016 ident: b0115 article-title: Classification and segmentation of satellite orthoimagery using convolutional neural networks publication-title: Remote Sens. – year: 2017 ident: b0010 article-title: Segnet: A deep convolutional encoder-decoder architecture for scene segmentation publication-title: IEEE Trans. Pattern Anal. Mach. Intell. – reference: Grangier, D., Bottou, L., Collobert, R., 2009. Deep convolutional networks for scene parsing. In: ICML 2009 Deep Learning Workshop, Citeseer. – volume: 14 start-page: 2331 year: 1993 end-page: 2348 ident: b0065 article-title: Empirical relations between digital SPOT HRV and CASI spectral resonse and lodgepole pine (pinus contorta) forest stand parameters publication-title: Int. J. Remote Sens. – reference: Krähenbühl, P., Koltun, V., 2011. 
Efficient inference in fully connected crfs with gaussian edge potentials. In: Advances in Neural Information Processing Systems. – reference: Socher, R., Huval, B., Bhat, B., Manning, C.D., Ng, A.Y., 2012. Convolutional-recursive deep learning for 3d object classification. In: Advances in Neural Information Processing Systems, 25. – reference: Xie, S., Tu, Z., 2015. Holistically-nested edge detection. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1395–1403. – reference: Mnih, V., Hinton, G.E., 2010. Learning to detect roads in high-resolution aerial images. In: European Conference on Computer Vision, Springer, pp. 210–223. – reference: Bertasius, G., Shi, J., Torresani, L., 2015. High-for-low and low-for-high: Efficient boundary detection from deep object features and its applications to high-level vision. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 504–512. – reference: Pinheiro, P., Collobert, R., 2014. Recurrent convolutional neural networks for scene labeling. In: International Conference on Machine Learning, pp. 82–90. – reference: Russell, B., Efros, A., Sivic, J., Freeman, B., Zisserman, A., 2009. Segmenting scenes by matching image composites. In: Advances in Neural Information Processing Systems, pp. 1580–1588. – reference: Glorot, X., Bengio, Y., 2010. Understanding the difficulty of training deep feedforward neural networks. In: International Conference on Artificial Intelligence and Statistics (AISTATS). – volume: 55 start-page: 881 year: 2017 end-page: 893 ident: b0225 article-title: Dense semantic labeling of subdecimeter resolution images with convolutional neural networks publication-title: IEEE Trans. Geosci. Remote Sens. – reference: Mou, L., Zhu, X., 2016. Spatiotemporal scene interpretation of space videos via deep neural network and tracklet analysis. In: 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), IEEE, pp. 4959–4962. 
– volume: 30 start-page: 482 year: 1992 end-page: 490 ident: b0025 article-title: Multispectral classification of landsat-images using neural networks publication-title: IEEE Trans. Geosci. Remote Sens. – reference: Marcu, A., Leordeanu, M., 2016. Dual local-global contextual pathways for recognition in aerial imagery. Available from: arXiv preprint arXiv: – volume: 2016 start-page: 1 year: 2016 end-page: 9 ident: b0195 article-title: Multiple object extraction from aerial imagery with convolutional neural networks publication-title: Electron. Imag. – reference: Lee, C.Y., Xie, S., Gallagher, P., Zhang, Z., Tu, Z., 2015. Deeply-supervised nets. In: AISTATS. – reference: Dosovitskiy, A., Fischer, P., Ilg, E., Häusser, P., Hazırbaş, C., Golkov, V., v.d. Smagt, P., Cremers, D., Brox, T., 2015. FlowNet: Learning optical flow with convolutional networks. In: IEEE International Conference on Computer Vision (ICCV). – year: 2013 ident: b0185 article-title: Remote Sensing Digital Image Analysis – reference: Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A., 2015. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In: International Conference on Learning Representations, San Diego, United States. – reference: Kokkinos, I., 2016. Pushing the boundaries of boundary detection using deep learning. In: International Conference on Learning Representations (ICLR). – volume: 36 start-page: 209 year: 2006 end-page: 214 ident: b0150 article-title: A test of automatic road extraction approaches publication-title: Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. – volume: 35 start-page: 1915 year: 2013 end-page: 1929 ident: b0060 article-title: Learning hierarchical features for scene labeling publication-title: IEEE Trans. Pattern Anal. Mach. Intell. – reference: Szeliski, R., 2010. Computer Vision: Algorithms and Applications. Springer. 
– volume: 62 start-page: 949 year: 1996 end-page: 958 ident: b0015 article-title: Inferring urban land use from satellite sensor images using kernel-based spatial reclassification publication-title: Photogramm. Eng. Remote Sens. – reference: Kampffmeyer, M., Salberg, A.B., Jenssen, R., 2016. Semantic segmentation of small objects and modeling of uncertainty in urban remote sensing images using deep convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1–9. – reference: Dai, J., He, K., Sun, J., 2016. Instance-aware semantic segmentation via multi-task network cascades. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3150–3158. – reference: Gerke, M., 2014. Use of the Stair Vision Library within the ISPRS 2D Semantic Labeling Benchmark (Vaihingen). Technical Report. ITC, University of Twente. – reference: . – reference: Chen, L.C., Barron, J.T., Papandreou, G., Murphy, K., Yuille, A.L., 2016b. Semantic image segmentation with task-specific edge detection using cnns and a discriminatively trained domain transform. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4545–4554. – reference: Long, J., Shelhamer, E., Darrell, T., 2015. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440. – reference: Simonyan, K., Zisserman, A., 2014. Very deep convolutional networks for large-scale image recognition. Available from: arXiv preprint arXiv: – reference: Maggiori, E., Tarabalka, Y., Charpiat, G., Alliez, P., 2016. High-resolution semantic labeling with convolutional neural networks. Available from: arXiv preprint arXiv: – volume: 57 start-page: 639 year: 1969 end-page: 653 ident: b0070 article-title: Information processing of remotely sensed agricultural data publication-title: Proc. 
IEEE – volume: 8 start-page: 329 year: 2016 ident: 10.1016/j.isprsjprs.2017.11.009_b0115 article-title: Classification and segmentation of satellite orthoimagery using convolutional neural networks publication-title: Remote Sens. doi: 10.3390/rs8040329 – ident: 10.1016/j.isprsjprs.2017.11.009_b0240 – volume: 48 start-page: 3747 year: 2010 ident: 10.1016/j.isprsjprs.2017.11.009_b0050 article-title: Morphological attribute profiles for the analysis of very high resolution images publication-title: IEEE Trans. Geosci. Remote Sens. doi: 10.1109/TGRS.2010.2048116 – ident: 10.1016/j.isprsjprs.2017.11.009_b0045 doi: 10.1109/CVPR.2016.343 – ident: 10.1016/j.isprsjprs.2017.11.009_b0080 – ident: 10.1016/j.isprsjprs.2017.11.009_b0205 – volume: 2016 start-page: 1 year: 2016 ident: 10.1016/j.isprsjprs.2017.11.009_b0195 article-title: Multiple object extraction from aerial imagery with convolutional neural networks publication-title: Electron. Imag. doi: 10.2352/ISSN.2470-1173.2016.10.ROBVIS-392 – ident: 10.1016/j.isprsjprs.2017.11.009_b0125 doi: 10.1109/CVPR.2015.7298965 – volume: 57 start-page: 639 year: 1969 ident: 10.1016/j.isprsjprs.2017.11.009_b0070 article-title: Information processing of remotely sensed agricultural data publication-title: Proc. IEEE doi: 10.1109/PROC.1969.7019 – ident: 10.1016/j.isprsjprs.2017.11.009_b0090 – ident: 10.1016/j.isprsjprs.2017.11.009_b0135 doi: 10.1109/ISSPIT.2015.7394333 – volume: 36 start-page: 209 year: 2006 ident: 10.1016/j.isprsjprs.2017.11.009_b0150 article-title: A test of automatic road extraction approaches publication-title: Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. – ident: 10.1016/j.isprsjprs.2017.11.009_b0190 – ident: 10.1016/j.isprsjprs.2017.11.009_b0085 – year: 2017 ident: 10.1016/j.isprsjprs.2017.11.009_b0010 article-title: Segnet: A deep convolutional encoder-decoder architecture for scene segmentation publication-title: IEEE Trans. Pattern Anal. Mach. Intell. 
doi: 10.1109/TPAMI.2016.2644615 – ident: 10.1016/j.isprsjprs.2017.11.009_b0155 doi: 10.1007/978-3-642-15567-3_16 – volume: 30 start-page: 482 year: 1992 ident: 10.1016/j.isprsjprs.2017.11.009_b0025 article-title: Multispectral classification of landsat-images using neural networks publication-title: IEEE Trans. Geosci. Remote Sens. doi: 10.1109/36.142926 – ident: 10.1016/j.isprsjprs.2017.11.009_b0120 – ident: 10.1016/j.isprsjprs.2017.11.009_b0170 doi: 10.1109/CVPRW.2015.7301381 – ident: 10.1016/j.isprsjprs.2017.11.009_b0020 doi: 10.1109/ICCV.2015.65 – ident: 10.1016/j.isprsjprs.2017.11.009_b0095 – volume: 53 start-page: 280 year: 2015 ident: 10.1016/j.isprsjprs.2017.11.009_b0220 article-title: Features, color spaces, and boosting: new insights on semantic classification of remote sensing images publication-title: IEEE Trans. Geosci. Remote Sens. doi: 10.1109/TGRS.2014.2321423 – ident: 10.1016/j.isprsjprs.2017.11.009_b0110 – ident: 10.1016/j.isprsjprs.2017.11.009_b0235 doi: 10.1109/CVPR.2016.28 – ident: 10.1016/j.isprsjprs.2017.11.009_b0030 – volume: 35 start-page: 1915 year: 2013 ident: 10.1016/j.isprsjprs.2017.11.009_b0060 article-title: Learning hierarchical features for scene labeling publication-title: IEEE Trans. Pattern Anal. Mach. Intell. doi: 10.1109/TPAMI.2012.231 – ident: 10.1016/j.isprsjprs.2017.11.009_b0130 doi: 10.1109/IGARSS.2017.8128163 – ident: 10.1016/j.isprsjprs.2017.11.009_b0245 doi: 10.1007/978-3-319-10590-1_53 – ident: 10.1016/j.isprsjprs.2017.11.009_b0175 – year: 2013 ident: 10.1016/j.isprsjprs.2017.11.009_b0185 – ident: 10.1016/j.isprsjprs.2017.11.009_b0215 doi: 10.1007/978-1-84882-935-0 – volume: 55 start-page: 881 year: 2017 ident: 10.1016/j.isprsjprs.2017.11.009_b0225 article-title: Dense semantic labeling of subdecimeter resolution images with convolutional neural networks publication-title: IEEE Trans. Geosci. Remote Sens. 
doi: 10.1109/TGRS.2016.2616585 – ident: 10.1016/j.isprsjprs.2017.11.009_b0040 – ident: 10.1016/j.isprsjprs.2017.11.009_b0100 doi: 10.1109/CVPRW.2016.90 – ident: 10.1016/j.isprsjprs.2017.11.009_b0230 doi: 10.1109/ICCV.2015.164 – ident: 10.1016/j.isprsjprs.2017.11.009_b0075 – ident: 10.1016/j.isprsjprs.2017.11.009_b0160 doi: 10.1109/IGARSS.2016.7729468 – ident: 10.1016/j.isprsjprs.2017.11.009_b0055 doi: 10.1109/ICCV.2015.316 – start-page: 473 year: 2016 ident: 10.1016/j.isprsjprs.2017.11.009_b0145 article-title: Semantic segmentation of aerial images with an ensemble of cnns publication-title: ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. doi: 10.5194/isprsannals-III-3-473-2016 – ident: 10.1016/j.isprsjprs.2017.11.009_b0165 doi: 10.1109/ICCV.2015.178 – ident: 10.1016/j.isprsjprs.2017.11.009_b0180 doi: 10.1007/978-3-319-46448-0_5 – ident: 10.1016/j.isprsjprs.2017.11.009_b0005 doi: 10.1007/978-3-319-54181-5_12 – ident: 10.1016/j.isprsjprs.2017.11.009_b0105 – ident: 10.1016/j.isprsjprs.2017.11.009_b0140 – volume: 62 start-page: 949 year: 1996 ident: 10.1016/j.isprsjprs.2017.11.009_b0015 article-title: Inferring urban land use from satellite sensor images using kernel-based spatial reclassification publication-title: Photogramm. Eng. Remote Sens. – ident: 10.1016/j.isprsjprs.2017.11.009_b0035 doi: 10.1109/CVPR.2016.492 – ident: 10.1016/j.isprsjprs.2017.11.009_b0210 – volume: 14 start-page: 2331 year: 1993 ident: 10.1016/j.isprsjprs.2017.11.009_b0065 article-title: Empirical relations between digital SPOT HRV and CASI spectral resonse and lodgepole pine (pinus contorta) forest stand parameters publication-title: Int. J. Remote Sens. doi: 10.1080/01431169308954040 – ident: 10.1016/j.isprsjprs.2017.11.009_b0200 |