A survey on deep learning techniques for image and video semantic segmentation

Bibliographic Details
Published in: Applied Soft Computing, Vol. 70, pp. 41-65
Main Authors: Garcia-Garcia, Alberto; Orts-Escolano, Sergio; Oprea, Sergiu; Villena-Martinez, Victor; Martinez-Gonzalez, Pablo; Garcia-Rodriguez, Jose
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.09.2018
Abstract
• An in-depth review of deep learning methods for semantic segmentation applied to various areas.
• An overview of background concepts and formulation for newcomers.
• A structured and logical review of datasets and methods, highlighting their contributions and significance.
• A quantitative comparison of performance and accuracy on common datasets.
• A discussion of promising future research lines and conclusions about the state of the art of the field.

Image semantic segmentation is of growing interest to computer vision and machine learning researchers. Many emerging applications demand accurate and efficient segmentation mechanisms: autonomous driving, indoor navigation, and virtual or augmented reality systems, to name a few. This demand coincides with the rise of deep learning approaches in almost every field related to computer vision, including semantic segmentation and scene understanding. This paper reviews deep learning methods for semantic segmentation across various application areas. First, we formulate the semantic segmentation problem and define the terminology of the field, along with relevant background concepts. Next, we present the main datasets and challenges to help researchers decide which best suit their needs and goals. We then review existing methods, highlighting their contributions and significance in the field. We also devote part of the paper to common loss functions and error metrics for this problem. Finally, we report quantitative results for the described methods on the datasets on which they were evaluated, followed by a discussion of the results. We close by pointing out a set of promising future directions and drawing our own conclusions about the state of the art of semantic segmentation using deep learning techniques.
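The abstract notes that the survey reviews common error metrics for semantic segmentation; the de facto standard among these is mean Intersection over Union (mIoU). As a minimal illustration of the metric (a sketch, not code from the paper), computed over toy label maps with NumPy:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Per-class Intersection over Union, averaged over classes.

    Classes absent from both prediction and ground truth are
    skipped so they do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        pred_c = (pred == c)
        target_c = (target == c)
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:  # class appears nowhere: skip it
            continue
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with two classes: class 0 has IoU 1/2,
# class 1 has IoU 2/3, so the mean is 7/12.
pred = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
print(mean_iou(pred, target, num_classes=2))  # ≈ 0.583
```

Libraries such as scikit-learn offer equivalent functionality (e.g. the Jaccard score), but the explicit loop makes the per-class accounting visible.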
Authors:
Garcia-Garcia, Alberto (ORCID: 0000-0002-9575-6403; agarcia@dtic.ua.es)
Orts-Escolano, Sergio (ORCID: 0000-0001-6817-6326; sorts@ua.es)
Oprea, Sergiu (soprea@dtic.ua.es)
Villena-Martinez, Victor (vvillena@dtic.ua.es)
Martinez-Gonzalez, Pablo (pmartinez@dtic.ua.es)
Garcia-Rodriguez, Jose (jgarcia@dtic.ua.es)
crossref_primary_10_1038_s41598_023_29230_7
crossref_primary_10_3390_electronics12061347
crossref_primary_10_3390_rs12020311
crossref_primary_10_1016_j_visinf_2024_11_002
crossref_primary_10_3390_jimaging8100284
crossref_primary_10_1080_21681163_2020_1790040
crossref_primary_10_3390_app12073247
crossref_primary_10_1016_j_energy_2022_124933
crossref_primary_10_53469_jssh_2024_6_12__20
crossref_primary_10_1109_ACCESS_2023_3321912
crossref_primary_10_3390_polym15020295
crossref_primary_10_1016_j_asoc_2020_106874
crossref_primary_10_3390_rs14133109
crossref_primary_10_1038_s41598_023_31677_7
crossref_primary_10_1016_j_compag_2021_106482
crossref_primary_10_1007_s11042_023_16782_z
crossref_primary_10_1016_j_patcog_2019_107125
crossref_primary_10_1016_j_engfracmech_2024_110149
crossref_primary_10_3390_rs12030560
crossref_primary_10_1016_j_compstruc_2021_106570
crossref_primary_10_1177_00220345241226871
crossref_primary_10_1007_s13278_024_01362_2
crossref_primary_10_3390_rs12132159
crossref_primary_10_1016_j_comcom_2021_06_022
crossref_primary_10_1016_j_media_2021_101994
crossref_primary_10_3390_app12125960
crossref_primary_10_32604_cmc_2022_028632
crossref_primary_10_1007_s12517_021_08420_5
crossref_primary_10_1007_s11694_020_00707_7
crossref_primary_10_3390_ijerph15081576
crossref_primary_10_1016_j_undsp_2023_09_012
crossref_primary_10_2139_ssrn_4158164
crossref_primary_10_3390_s22249743
crossref_primary_10_1016_j_eswa_2022_116812
crossref_primary_10_3390_electronics14030620
crossref_primary_10_1109_ACCESS_2022_3184453
crossref_primary_10_1007_s11831_021_09542_5
crossref_primary_10_3390_bioengineering11100994
crossref_primary_10_3390_land11060905
crossref_primary_10_3390_s19245361
crossref_primary_10_3390_s22197131
crossref_primary_10_3390_electronics12122730
crossref_primary_10_3390_f15091504
crossref_primary_10_3390_mi13111920
crossref_primary_10_1080_01431161_2024_2365811
crossref_primary_10_1007_s40747_023_01054_y
crossref_primary_10_1016_j_jmr_2025_107851
crossref_primary_10_35414_akufemubid_1013047
crossref_primary_10_1007_s11814_024_00277_0
crossref_primary_10_3390_rs12162532
crossref_primary_10_3390_rs16122222
crossref_primary_10_3390_s21082627
crossref_primary_10_1007_s11517_024_03252_3
crossref_primary_10_1016_j_engstruct_2024_118996
crossref_primary_10_1186_s40543_023_00407_z
crossref_primary_10_1016_j_autcon_2022_104342
crossref_primary_10_1007_s44163_021_00004_2
crossref_primary_10_3390_s22176546
crossref_primary_10_1016_j_eswa_2022_117846
crossref_primary_10_1016_j_ijpharm_2022_122179
crossref_primary_10_1016_j_tust_2020_103677
crossref_primary_10_3390_jimaging8030055
crossref_primary_10_1002_rob_22120
crossref_primary_10_1016_j_compag_2021_105987
crossref_primary_10_2196_63686
crossref_primary_10_1016_j_neucom_2022_11_084
crossref_primary_10_1016_j_aej_2023_04_062
crossref_primary_10_1002_esp_5652
crossref_primary_10_3390_app122211801
crossref_primary_10_3390_jmse11071265
crossref_primary_10_1016_j_ymssp_2022_109287
crossref_primary_10_1016_j_neucom_2020_06_095
crossref_primary_10_1016_j_isprsjprs_2018_04_022
crossref_primary_10_1016_j_bspc_2022_103633
crossref_primary_10_37394_232015_2020_16_65
crossref_primary_10_1097_ACM_0000000000003630
crossref_primary_10_3390_e24070942
crossref_primary_10_3390_app10134486
crossref_primary_10_1016_j_future_2021_06_045
crossref_primary_10_1016_j_cviu_2023_103744
crossref_primary_10_1016_j_compbiomed_2020_103738
crossref_primary_10_1016_j_compchemeng_2021_107614
crossref_primary_10_1186_s40494_023_00895_7
crossref_primary_10_1016_j_isprsjprs_2023_07_024
crossref_primary_10_1016_j_tust_2023_105107
crossref_primary_10_1016_j_measurement_2023_113207
crossref_primary_10_1007_s11042_023_17112_z
crossref_primary_10_1016_j_measurement_2023_113205
crossref_primary_10_1007_s00521_021_06401_z
crossref_primary_10_3390_rs16132276
crossref_primary_10_1109_ACCESS_2023_3241638
crossref_primary_10_1093_jge_gxad024
crossref_primary_10_1117_1_JBO_28_10_106003
crossref_primary_10_1190_geo2018_0870_1
crossref_primary_10_1109_LGRS_2023_3333017
crossref_primary_10_1016_j_neucom_2019_02_003
crossref_primary_10_1186_s12859_024_05894_4
crossref_primary_10_3390_rs14194763
crossref_primary_10_3390_ijgi13050153
crossref_primary_10_3390_app112412093
crossref_primary_10_1088_1361_6501_ac3856
crossref_primary_10_1007_s11042_023_14978_x
crossref_primary_10_1016_j_egyai_2023_100265
crossref_primary_10_5194_essd_14_295_2022
crossref_primary_10_1016_j_asoc_2021_107511
crossref_primary_10_3390_electronics12132975
crossref_primary_10_1016_j_imavis_2023_104708
crossref_primary_10_1016_j_compag_2019_105091
crossref_primary_10_3390_computers9040099
crossref_primary_10_1038_s41598_023_29665_y
crossref_primary_10_1109_TMLCN_2025_3530875
crossref_primary_10_4028_p_6bfVRH
crossref_primary_10_3390_rs14194744
crossref_primary_10_1088_1742_6596_2356_1_012039
crossref_primary_10_1038_s41598_023_32149_8
crossref_primary_10_3389_fcvm_2020_00086
crossref_primary_10_1016_j_neucom_2023_03_006
crossref_primary_10_3390_rs11070847
crossref_primary_10_1007_s41064_020_00124_x
crossref_primary_10_1002_mp_14512
crossref_primary_10_1016_j_petrol_2022_110734
crossref_primary_10_3390_rs11242912
crossref_primary_10_1007_s00138_023_01391_5
crossref_primary_10_1016_j_imavis_2022_104401
crossref_primary_10_3390_s21196565
crossref_primary_10_1016_j_asoc_2019_105975
crossref_primary_10_1007_s10462_020_09854_1
crossref_primary_10_3390_rs12172770
crossref_primary_10_1007_s00521_024_10165_7
crossref_primary_10_1007_s42979_023_02338_3
crossref_primary_10_1109_MSP_2020_2977269
crossref_primary_10_1109_TIV_2020_2980671
crossref_primary_10_3390_rs15082208
crossref_primary_10_1109_ACCESS_2023_3241837
crossref_primary_10_3390_rs15061525
crossref_primary_10_1038_s41598_020_78799_w
crossref_primary_10_1080_10095020_2023_2292587
crossref_primary_10_1016_j_asoc_2020_106353
crossref_primary_10_1145_3472770
crossref_primary_10_17341_gazimmfd_652101
crossref_primary_10_3390_jimaging7110241
crossref_primary_10_1016_j_energy_2019_03_080
crossref_primary_10_1017_S0890060420000372
crossref_primary_10_1016_j_patcog_2023_109667
crossref_primary_10_1088_1361_6501_aceb7e
crossref_primary_10_1002_jbio_202300078
crossref_primary_10_1016_j_aei_2021_101307
crossref_primary_10_1109_JIOT_2020_3022353
crossref_primary_10_1088_1742_6596_2418_1_012081
crossref_primary_10_1016_j_compchemeng_2022_107768
crossref_primary_10_1016_j_bbe_2020_09_008
crossref_primary_10_1016_j_ecolind_2024_111844
crossref_primary_10_1007_s00530_021_00758_w
crossref_primary_10_21307_ijanmc_2021_022
crossref_primary_10_3390_pharmaceutics14112257
crossref_primary_10_34133_research_0491
crossref_primary_10_1111_mice_12481
crossref_primary_10_1134_S1054661821030202
crossref_primary_10_1049_iet_ipr_2019_1398
crossref_primary_10_3390_electronics12040827
crossref_primary_10_3390_rs14061516
crossref_primary_10_3390_s23136238
crossref_primary_10_1088_2399_6528_abebcf
crossref_primary_10_3390_s20020563
crossref_primary_10_3390_s21062153
crossref_primary_10_3389_frai_2020_00072
crossref_primary_10_1016_j_cviu_2020_103077
crossref_primary_10_3390_rs16050817
crossref_primary_10_3390_electronics12194169
crossref_primary_10_1007_s41870_023_01408_2
crossref_primary_10_1080_13621718_2019_1687635
crossref_primary_10_1111_exsy_12742
crossref_primary_10_3390_app15031063
crossref_primary_10_1134_S0021364023602725
crossref_primary_10_1155_2021_3481469
crossref_primary_10_34133_plantphenomics_0271
crossref_primary_10_1038_s41598_024_64636_x
crossref_primary_10_1007_s10278_018_0160_1
crossref_primary_10_3390_rs12152368
crossref_primary_10_3390_rs14071527
crossref_primary_10_1016_j_neucom_2021_08_157
crossref_primary_10_3390_land13071007
crossref_primary_10_1007_s00500_024_09946_y
crossref_primary_10_1007_s11604_018_0795_3
crossref_primary_10_1007_s12021_021_09556_1
crossref_primary_10_1109_TGRS_2024_3446628
crossref_primary_10_1007_s44379_024_00003_x
crossref_primary_10_1007_s44196_023_00364_w
crossref_primary_10_1016_j_bspc_2021_102661
crossref_primary_10_1016_j_media_2018_08_007
crossref_primary_10_1007_s11554_021_01170_3
crossref_primary_10_1007_s10494_020_00151_z
crossref_primary_10_1016_j_asoc_2020_106153
crossref_primary_10_3390_s21103389
crossref_primary_10_1080_01431161_2021_1913298
crossref_primary_10_1109_TITS_2023_3321309
crossref_primary_10_1016_j_compag_2022_106911
crossref_primary_10_1186_s12859_020_3521_y
crossref_primary_10_3390_s21041492
crossref_primary_10_1007_s11042_022_12447_5
crossref_primary_10_1007_s11042_022_12425_x
crossref_primary_10_3390_app13010164
crossref_primary_10_1007_s12652_020_01803_8
crossref_primary_10_1016_j_isprsjprs_2024_02_010
crossref_primary_10_1007_s11042_023_14961_6
crossref_primary_10_1016_j_eswa_2023_119950
crossref_primary_10_1111_mice_12433
crossref_primary_10_3390_rs14153650
crossref_primary_10_3390_app14146298
crossref_primary_10_1061_JCEMD4_COENG_12542
crossref_primary_10_1109_ACCESS_2020_3045147
crossref_primary_10_1016_j_mlwa_2021_100158
crossref_primary_10_1109_TITS_2021_3127553
crossref_primary_10_3390_robotics10010002
crossref_primary_10_1016_j_marenvres_2022_105829
crossref_primary_10_3390_rs14163864
crossref_primary_10_1109_TMC_2023_3265010
crossref_primary_10_1007_s11760_021_01862_0
crossref_primary_10_1016_j_patrec_2020_10_011
crossref_primary_10_3390_app11209691
crossref_primary_10_1007_s41060_024_00660_4
crossref_primary_10_3390_f15030561
crossref_primary_10_1049_ipr2_12935
crossref_primary_10_3390_electronics11223787
crossref_primary_10_1016_j_cviu_2019_102809
crossref_primary_10_1016_j_isprsjprs_2021_07_012
crossref_primary_10_3390_f15030529
crossref_primary_10_1016_j_knosys_2024_112217
crossref_primary_10_1016_j_cmpb_2022_106874
crossref_primary_10_54569_aair_1164731
crossref_primary_10_1109_TIV_2023_3268051
crossref_primary_10_1155_are_8892810
crossref_primary_10_1186_s42400_023_00145_0
crossref_primary_10_3390_app9030404
crossref_primary_10_1016_j_engappai_2023_107486
crossref_primary_10_1016_j_measen_2023_100974
crossref_primary_10_1617_s11527_024_02341_x
crossref_primary_10_3390_rs15184554
crossref_primary_10_3390_jmse10101503
crossref_primary_10_3390_w13182512
crossref_primary_10_1109_TIV_2022_3167733
crossref_primary_10_3390_app10082641
crossref_primary_10_4018_IJWLTT_334708
crossref_primary_10_1109_TIV_2022_3216734
crossref_primary_10_1080_14942119_2021_1831426
crossref_primary_10_3390_s25051278
crossref_primary_10_1109_JSTARS_2022_3203750
crossref_primary_10_1111_mice_12667
crossref_primary_10_2478_amns_2025_0338
crossref_primary_10_1016_j_cmpb_2021_106563
crossref_primary_10_29252_jgit_7_3_173
crossref_primary_10_1007_s11042_020_09518_w
crossref_primary_10_3390_ma14216311
crossref_primary_10_1016_j_autcon_2021_103804
crossref_primary_10_3390_s22010329
crossref_primary_10_1016_j_conbuildmat_2024_138379
crossref_primary_10_1016_j_mlwa_2022_100422
crossref_primary_10_1016_j_procs_2019_02_076
crossref_primary_10_1111_cogs_13258
crossref_primary_10_1145_3714463
crossref_primary_10_1016_j_asoc_2021_107101
crossref_primary_10_1016_j_asoc_2021_107344
Cites_doi 10.1145/3005348
10.1162/neco.1997.9.8.1735
10.1109/CVPR.2012.6248074
10.1007/s11263-007-0090-8
10.1145/2461912.2462002
10.1007/s11263-007-0109-1
10.1007/s11263-015-0816-y
10.1177/0278364913491297
10.1109/ICCV.2013.458
10.1016/j.patrec.2008.04.005
10.1007/s11263-014-0733-5
10.1364/BOE.8.003627
10.1109/TIP.2005.852470
10.1145/2980179.2980238
10.1145/1531326.1531379
10.1016/j.jvcir.2015.10.012
10.1109/34.969114
10.1109/TPAMI.2012.231
10.1109/TPAMI.2016.2644615
ContentType Journal Article
Copyright 2018 Elsevier B.V.
Copyright_xml – notice: 2018 Elsevier B.V.
DBID AAYXX
CITATION
DOI 10.1016/j.asoc.2018.05.018
DatabaseName CrossRef
DatabaseTitle CrossRef
DatabaseTitleList
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISSN 1872-9681
EndPage 65
ExternalDocumentID 10_1016_j_asoc_2018_05_018
S1568494618302813
ISSN 1568-4946
IsPeerReviewed true
IsScholarly true
Keywords Deep learning
Semantic segmentation
Scene labeling
Language English
LinkModel DirectLink
ORCID 0000-0001-6817-6326
0000-0002-9575-6403
PageCount 25
ParticipantIDs crossref_primary_10_1016_j_asoc_2018_05_018
crossref_citationtrail_10_1016_j_asoc_2018_05_018
elsevier_sciencedirect_doi_10_1016_j_asoc_2018_05_018
PublicationCentury 2000
PublicationDate September 2018
2018-09-00
PublicationDateYYYYMMDD 2018-09-01
PublicationDate_xml – month: 09
  year: 2018
  text: September 2018
PublicationDecade 2010
PublicationTitle Applied soft computing
PublicationYear 2018
Publisher Elsevier B.V
Publisher_xml – name: Elsevier B.V
References Yosinski, Clune, Bengio, Lipson (bib0115) 2014
Shuai, Zuo, Wang, Wang (bib0435) 2016
Geiger, Lenz, Urtasun (bib0010) 2012
Gould, Fulton, Koller (bib0215) 2009
Farabet, Couprie, Najman, LeCun (bib0045) 2013; 35
Russell, Torralba, Murphy, Freeman (bib0310) 2008; 77
Ess, Müller, Grabner, Van Gool (bib0005) 2009
Krizhevsky, Sutskever, Hinton (bib0070) 2012
Raj, Maturana, Scherer (bib0385) 2015
Zheng, Jayasumana, Romera-Paredes, Vineet, Su, Du, Huang, Torr (bib0370) 2015
Roy, Conjeti, Karri, Sheet, Katouzian, Wachinger, Navab (bib0590) 2017; 8
Perazzi, Pont-Tuset, McWilliams, Van Gool, Gross, Sorkine-Hornung (bib0235) 2016
Byeon, Breuel, Raue, Liwicki (bib0425) 2015
Armeni, Sax, Zamir, Savarese (bib0270) 2017
Neverova, Luc, Couprie, Verbeek, LeCun (bib0485) 2017
Zeiler, Taylor, Fergus (bib0490) 2011
Quadros, Underwood, Douillard (bib0280) 2012
Brostow, Shotton, Fauqueur, Cipolla (bib0290) 2008
Shen, Hertzmann, Jia, Paris, Price, Shechtman, Sachs (bib0145) 2016; vol. 35
Zhang, Liu, Wang (bib0500) 2017
Zhang, Candra, Vetter, Zakhor (bib0210) 2015
Zhang, Jiang, Zhang, Li, Xia, Chen (bib0575) 2014
Gupta, Girshick, Arbeláez, Malik (bib0055) 2014
Krähenbühl, Koltun (bib0520) 2013
Yoon, Jeon, Yoo, Lee, So Kweon (bib0025) 2015
Cordts, Omran, Ramos, Rehfeld, Enzweiler, Benenson, Franke, Roth, Schiele (bib0015) 2016
Milletari, Navab, Ahmadi (bib0595) 2016
Tran, Bourdev, Fergus, Torresani, Paluri (bib0585) 2015
Ning, Delhomme, LeCun, Piano, Bottou, Barbano (bib0035) 2005; 14
Roy, Todorovic (bib0395) 2016
Pinheiro, Collobert, Dollar (bib0440) 2015
Richter, Vineet, Roth, Koltun (bib0135) 2016
Liang-Chieh, Papandreou, Kokkinos, Murphy, Yuille (bib0360) 2015
Shotton, Winn, Rother, Criminisi (bib0510) 2009; 81
Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár, Zitnick (bib0170) 2014
Ros, Sellart, Materzynska, Vazquez, Lopez (bib0175) 2016
Hackel, Wegner, Schindler (bib0285) 2016
Liu, Rabinovich, Berg (bib0405) 2015
Pont-Tuset, Perazzi, Caelles, Arbeláez, Sorkine-Hornung, Van Gool (bib0240) 2017
Zhao, Shi, Qi, Wang, Jia (bib0410) 2016
Chang, Funkhouser, Guibas, Hanrahan, Huang, Li, Savarese, Savva, Song, Su (bib0330) 2015
Eigen, Fergus (bib0390) 2015
Cordts, Omran, Ramos, Scharwächter, Enzweiler, Benenson, Franke, Roth, Schiele (bib0180) 2015
Boykov, Veksler, Zabih (bib0580) 2001; 23
Jain, Grauman (bib0225) 2014
Li, Gan, Liang, Yu, Cheng, Lin (bib0540) 2016
Mottaghi, Chen, Liu, Cho, Lee, Fidler, Urtasun, Yuille (bib0155) 2014
Xiao, Owens, Torralba (bib0250) 2013
Visin, Kastner, Cho, Matteucci, Courville, Bengio (bib0100) 2015
Chen, Mottaghi, Liu, Fidler, Urtasun, Yuille (bib0160) 2014
Chen, Golovinskiy, Funkhouser (bib0275) 2009; 28
Zagoruyko, Lerer, Lin, Pinheiro, Gross, Chintala, Dollár (bib0450) 2016
Ronneberger, Fischer, Brox (bib0345) 2015
Huang, You (bib0455) 2016
Ma, Stuckler, Kerl, Cremers (bib0565) 2017
Hariharan, Arbeláez, Bourdev, Maji, Malik (bib0165) 2011
Russakovsky, Deng, Su, Krause, Satheesh, Ma, Huang, Karpathy, Khosla, Bernstein (bib0130) 2015; 115
Prest, Leistner, Civera, Schmid, Ferrari (bib0300) 2012
Girshick, Donahue, Darrell, Malik (bib0555) 2014
Molchanov, Tyree, Karras, Aila, Kautz (bib0630) 2016
Wang, Sun, Liu, Sarma, Bronstein, Solomon (bib0470) 2018
Gupta, Arbelaez, Malik (bib0315) 2013
Zhu, Meng, Cai, Lu (bib0060) 2016; 34
Everingham, Eslami, Van Gool, Williams, Winn, Zisserman (bib0150) 2015; 111
Pinheiro, Lin, Collobert, Dollár (bib0445) 2016
Cho, van Merrienboer, Bahdanau, Bengio (bib0535) 2014
Long, Shelhamer, Darrell (bib0340) 2015
Zhou, Wu, Wu, Zhou (bib0525) 2015
Thoma (bib0065) 2016
Ros, Alvarez (bib0200) 2015
Pinheiro, Collobert (bib0430) 2014
Paszke, Chaurasia, Kim, Culurciello (bib0380) 2016
Arbeláez, Pont-Tuset, Barron, Marques, Malik (bib0550) 2014
Wan, Wang, Hoi, Wu, Zhu, Zhang, Li (bib0030) 2014
Li, Gan, Liang, Yu, Cheng, Lin (bib0420) 2016
Wong, Gatt, Stamatescu, McDonnell (bib0140) 2016
Szegedy, Liu, Jia, Sermanet, Reed, Anguelov, Erhan, Vanhoucke, Rabinovich (bib0085) 2015
Bell, Upchurch, Snavely, Bala (bib0230) 2015
Bell, Upchurch, Snavely, Bala (bib0305) 2013; 32
Pathak, Krähenbühl, Donahue, Darrell, Efros (bib0120) 2016
Song, Lichtenberg, Xiao (bib0255) 2015
Kendall, Badrinarayanan, Cipolla (bib0355) 2015
Noh, Hong, Han (bib0080) 2015
He, Zhang, Ren, Sun (bib0090) 2016
Alvarez, Gevers, LeCun, Lopez (bib0195) 2012
Liu, Yuen, Torralba (bib0220) 2009
Lai, Bo, Ren, Fox (bib0260) 2011
Bian, Lim, Zhou (bib0400) 2016
Yi, Kim, Ceylan, Shen, Yan, Su, Lu, Huang, Sheffer, Guibas (bib0265) 2016
Visin, Ciccone, Romero, Kastner, Cho, Bengio, Matteucci, Courville (bib0415) 2016
Han, Mao, Dally (bib0625) 2015
Brostow, Fauqueur, Cipolla (bib0185) 2009; 30
Ros, Ramos, Granados, Bakhtiary, Vazquez, Lopez (bib0205) 2015
Lin, Goyal, Girshick, He, Dollár (bib0600) 2017
Niepert, Ahmed, Kutzkov (bib0615) 2016
Qi, Su, Mo, Guibas (bib0460) 2016
Tran, Bourdev, Fergus, Torresani, Paluri (bib0480) 2016
Ciresan, Giusti, Gambardella, Schmidhuber (bib0040) 2012
Zeiler, Fergus (bib0495) 2014
Janoch, Karayev, Jia, Barron, Fritz, Saenko, Darrell (bib0320) 2013
Anwar, Hwang, Sung (bib0620) 2017; 13
Hariharan, Arbeláez, Girshick, Malik (bib0050) 2014
Simonyan, Zisserman (bib0075) 2014
Richtsfeld (bib0325) 2012
Oquab, Bottou, Laptev, Sivic (bib0110) 2014
Chen, Papandreou, Kokkinos, Murphy, Yuille (bib0365) 2016
Hazirbas, Ma, Domokos, Cremers (bib0570) 2016
Deng, Dong, Socher, Li, Li, Fei-Fei (bib0125) 2009
Ahmed, Yu, Xu, Gong, Xing (bib0105) 2008
Li, Yu (bib0545) 2016
Armeni, Sener, Zamir, Jiang, Brilakis, Fischer, Savarese (bib0335) 2016
Henaff, Bruna, LeCun (bib0605) 2015
Qi, Yi, Su, Guibas (bib0465) 2017
Sturgess, Alahari, Ladicky, Torr (bib0190) 2009
Hochreiter, Schmidhuber (bib0530) 1997; 9
Geiger, Lenz, Stiller, Urtasun (bib0295) 2013; 32
Zeng, Yu, Song, Suo, Walker, Rodriguez, Xiao (bib0560) 2017
Kipf, Welling (bib0610) 2016
Shelhamer, Rakelly, Hoffman, Darrell (bib0475) 2016
Rother, Kolmogorov, Blake (bib0505) 2004; vol. 23
Badrinarayanan, Kendall, Cipolla (bib0350) 2015; 39
Silberman, Hoiem, Kohli, Fergus (bib0245) 2012
Yu, Koltun (bib0375) 2015
Oberweger, Wohlhart, Lepetit (bib0020) 2015
Koltun (bib0515) 2011; 2
Graves, Fernández, Schmidhuber (bib0095) 2007
References_xml – year: 2009
  ident: bib0190
  article-title: Combining appearance and structure from motion features for road scene understanding
  publication-title: BMVC 2009 – 20th British Machine Vision Conference, BMVA
– start-page: 1
  year: 2016
  end-page: 6
  ident: bib0140
  article-title: Understanding data augmentation for classification: when to warp?
  publication-title: 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA)
– start-page: 513
  year: 2013
  end-page: 521
  ident: bib0520
  article-title: Parameter learning and convergent inference for dense random fields
  publication-title: ICML (3)
– start-page: 1625
  year: 2013
  end-page: 1632
  ident: bib0250
  article-title: SUN3D: a database of big spaces reconstructed using SfM and object labels
  publication-title: 2013 IEEE International Conference on Computer Vision
– volume: 28
  year: 2009
  ident: bib0275
  article-title: A benchmark for 3D mesh segmentation
  publication-title: ACM Trans. Graph. (Proc. SIGGRAPH)
– start-page: 852
  year: 2016
  end-page: 868
  ident: bib0475
  article-title: Clockwork convnets for video semantic segmentation
  publication-title: Computer Vision – ECCV 2016 Workshops
– start-page: 1520
  year: 2015
  end-page: 1528
  ident: bib0080
  article-title: Learning deconvolution network for semantic segmentation
  publication-title: Proceedings of the IEEE International Conference on Computer Vision
– start-page: 2
  year: 2009
  ident: bib0005
  article-title: Segmentation-based urban traffic scene understanding
  publication-title: BMVC, vol. 1
– year: 2015
  ident: bib0100
  article-title: ReNet: A Recurrent Neural Network Based Alternative to Convolutional Networks, CoRR abs/1505.00393
– start-page: 3282
  year: 2012
  end-page: 3289
  ident: bib0300
  article-title: Learning object class detectors from weakly annotated video
  publication-title: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
– year: 2014
  ident: bib0155
  article-title: The role of context for object detection and semantic segmentation in the wild
  publication-title: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
– year: 2017
  ident: bib0485
  article-title: Predicting Deeper into the Future of Semantic Segmentation, CoRR abs/1703.07684
– start-page: 1529
  year: 2015
  end-page: 1537
  ident: bib0370
  article-title: Conditional random fields as recurrent neural networks
  publication-title: Proceedings of the IEEE International Conference on Computer Vision
– start-page: 345
  year: 2014
  end-page: 360
  ident: bib0055
  article-title: Learning rich features from RGB-D images for object detection and segmentation
  publication-title: European Conference on Computer Vision
– year: 2015
  ident: bib0375
  article-title: Multi-scale Context Aggregation by Dilated Convolutions
– start-page: 17
  year: 2016
  end-page: 24
  ident: bib0480
  article-title: Deep end2end voxel2voxel prediction
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops
– year: 2015
  ident: bib0605
  article-title: Deep Convolutional Networks on Graph-Structured Data
– start-page: 564
  year: 2013
  end-page: 571
  ident: bib0315
  article-title: Perceptual organization and recognition of indoor scenes from RGB-D images
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2015
  ident: bib0330
  article-title: ShapeNet: An Information-Rich 3D Model Repository
– volume: 9
  start-page: 1735
  year: 1997
  end-page: 1780
  ident: bib0530
  article-title: Long short-term memory
  publication-title: Neural Comput.
– start-page: 770
  year: 2016
  end-page: 778
  ident: bib0090
  article-title: Deep residual learning for image recognition
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2017
  ident: bib0270
  article-title: Joint 2D-3D-Semantic Data for Indoor Scene Understanding
– year: 2016
  ident: bib0595
  article-title: V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation, CoRR abs/1606.04797
– year: 2015
  ident: bib0020
  article-title: Hands Deep in Deep Learning for Hand Pose Estimation
– volume: 77
  start-page: 157
  year: 2008
  end-page: 173
  ident: bib0310
  article-title: LabelMe: a database and web-based tool for image annotation
  publication-title: Int. J. Comput. Vis.
– start-page: 3354
  year: 2012
  end-page: 3361
  ident: bib0010
  article-title: Are we ready for autonomous driving? The KITTI vision benchmark suite
  publication-title: 2012 IEEE Conference on Computer Vision and Pattern Recognition
– year: 2016
  ident: bib0415
  article-title: ReSeg: a recurrent neural network-based model for semantic segmentation
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops
– volume: vol. 23
  start-page: 309
  year: 2004
  end-page: 314
  ident: bib0505
  article-title: GrabCut: interactive foreground extraction using iterated graph cuts
  publication-title: ACM Transactions on Graphics (TOG)
– volume: vol. 35
  start-page: 93
  year: 2016
  end-page: 102
  ident: bib0145
  article-title: Automatic portrait segmentation for image stylization
  publication-title: Computer Graphics Forum
– volume: 32
  year: 2013
  ident: bib0305
  article-title: OpenSurfaces: a richly annotated catalog of surface appearance
  publication-title: ACM Trans. Graph. (SIGGRAPH)
– start-page: 24
  year: 2015
  end-page: 32
  ident: bib0025
  article-title: Learning a deep convolutional network for light-field image super-resolution
  publication-title: Proceedings of the IEEE International Conference on Computer Vision Workshops
– start-page: 186
  year: 2016
  end-page: 201
  ident: bib0395
  article-title: A multi-scale CNN for affordance segmentation in RGB images
  publication-title: European Conference on Computer Vision
– start-page: 75
  year: 2016
  end-page: 91
  ident: bib0445
  article-title: Learning to refine object segments
  publication-title: European Conference on Computer Vision
– year: 2016
  ident: bib0450
  article-title: A multipath network for object detection
  publication-title: Proceedings of the British Machine Vision Conference 2016, BMVC 2016
– start-page: 82
  year: 2014
  end-page: 90
  ident: bib0430
  article-title: Recurrent convolutional neural networks for scene labeling
  publication-title: ICML
– start-page: 2843
  year: 2012
  end-page: 2851
  ident: bib0040
  article-title: Deep neural networks segment neuronal membranes in electron microscopy images
  publication-title: Advances in Neural Information Processing Systems
– volume: 35
  start-page: 1915
  year: 2013
  end-page: 1929
  ident: bib0045
  article-title: Learning hierarchical features for scene labeling
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– start-page: 580
  year: 2014
  end-page: 587
  ident: bib0555
  article-title: Rich feature hierarchies for accurate object detection and semantic segmentation
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– volume: 111
  start-page: 98
  year: 2015
  end-page: 136
  ident: bib0150
  article-title: The PASCAL visual object classes challenge: a retrospective
  publication-title: Int. J. Comput. Vis.
– start-page: 991
  year: 2011
  end-page: 998
  ident: bib0165
  article-title: Semantic contours from inverse detectors
  publication-title: 2011 International Conference on Computer Vision
– start-page: 3431
  year: 2015
  end-page: 3440
  ident: bib0340
  article-title: Fully convolutional networks for semantic segmentation
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2015
  ident: bib0180
  article-title: The Cityscapes dataset
  publication-title: CVPR Workshop on The Future of Datasets in Vision
– start-page: 2650
  year: 2015
  end-page: 2658
  ident: bib0390
  article-title: Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture
  publication-title: Proceedings of the IEEE International Conference on Computer Vision
– year: 2016
  ident: bib0540
  article-title: RGB-D Scene Labeling with Long Short-Term Memorized Fusion Model, CoRR abs/1604.05000
– start-page: 537
  year: 2015
  end-page: 542
  ident: bib0200
  article-title: Unsupervised image transformation for outdoor semantic labelling
  publication-title: 2015 IEEE Intelligent Vehicles Symposium (IV)
– volume: 23
  start-page: 1222
  year: 2001
  end-page: 1239
  ident: bib0580
  article-title: Fast approximate energy minimization via graph cuts
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– year: 2018
  ident: bib0470
  article-title: Dynamic Graph CNN for Learning on Point Clouds
– year: 2017
  ident: bib0565
  article-title: Multi-view Deep Learning for Consistent Semantic Mapping with RGB-D Cameras
– volume: 2
  start-page: 4
  year: 2011
  ident: bib0515
  article-title: Efficient inference in fully connected CRFs with Gaussian edge potentials
  publication-title: Adv. Neural Inf. Process. Syst.
– year: 2016
  ident: bib0460
  article-title: PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
– start-page: 141
  year: 2013
  end-page: 165
  ident: bib0320
  article-title: A Category-Level 3D Object Dataset: Putting the Kinect to Work
– start-page: 3234
  year: 2016
  end-page: 3243
  ident: bib0175
  article-title: The SYNTHIA dataset: a large collection of synthetic images for semantic segmentation of urban scenes
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– volume: 81
  start-page: 2
  year: 2009
  end-page: 23
  ident: bib0510
  article-title: TextonBoost for image understanding: multi-class object recognition and segmentation by jointly modeling texture, layout, and context
  publication-title: Int. J. Comput. Vis.
– year: 2017
  ident: bib0600
  article-title: Focal loss for dense object detection
  publication-title: International Conference on Computer Vision (ICCV)
– volume: 14
  start-page: 1360
  year: 2005
  end-page: 1371
  ident: bib0035
  article-title: Toward automatic phenotyping of developing embryos from videos
  publication-title: IEEE Trans. Image Process.
– start-page: 549
  year: 2007
  end-page: 558
  ident: bib0095
  article-title: Multi-dimensional Recurrent Neural Networks
– start-page: 541
  year: 2016
  end-page: 557
  ident: bib0420
  article-title: LSTM-CF: Unifying Context Modeling and Fusion with LSTMs for RGB-D Scene Labeling
– start-page: 478
  year: 2016
  end-page: 487
  ident: bib0545
  article-title: Deep contrast learning for salient object detection
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2016
  ident: bib0235
  article-title: A benchmark dataset and evaluation methodology for video object segmentation
  publication-title: Computer Vision and Pattern Recognition
– start-page: 157
  year: 2014
  end-page: 166
  ident: bib0030
  article-title: Deep learning for content-based image retrieval: a comprehensive study
  publication-title: Proceedings of the 22nd ACM International Conference on Multimedia
– volume: 13
  start-page: 32
  year: 2017
  ident: bib0620
  article-title: Structured pruning of deep convolutional neural networks
  publication-title: ACM J. Emerg. Technol. Comput. Syst.
– year: 2015
  ident: bib0405
  article-title: ParseNet: Looking Wider to See Better
– start-page: 1534
  year: 2016
  end-page: 1543
  ident: bib0335
  article-title: 3D semantic parsing of large-scale indoor spaces
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– volume: 8
  start-page: 3627
  year: 2017
  end-page: 3642
  ident: bib0590
  article-title: ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks
  publication-title: Biomed. Opt. Express
– volume: 115
  start-page: 211
  year: 2015
  end-page: 252
  ident: bib0130
  article-title: ImageNet large scale visual recognition challenge
  publication-title: Int. J. Comput. Vis.
– year: 2015
  ident: bib0625
  article-title: Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
– start-page: 248
  year: 2009
  end-page: 255
  ident: bib0125
  article-title: ImageNet: a large-scale hierarchical image database
  publication-title: IEEE Conference on Computer Vision and Pattern Recognition, 2009, CVPR 2009
– year: 2015
  ident: bib0355
  article-title: Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding
– year: 2016
  ident: bib0570
  article-title: FuseNet: incorporating depth into semantic segmentation via fusion-based CNN architecture
  publication-title: Proc. ACCV, vol. 2
– start-page: 1850
  year: 2015
  end-page: 1857
  ident: bib0210
  article-title: Sensor fusion for semantic segmentation of urban scenes
  publication-title: 2015 IEEE International Conference on Robotics and Automation (ICRA)
– year: 2015
  ident: bib0525
  article-title: Exploiting Local Structures with the Kronecker Layer in Convolutional Networks
– start-page: 69
  year: 2008
  end-page: 82
  ident: bib0105
  article-title: Training hierarchical feed-forward visual recognition models using transfer learning from pseudo-tasks
  publication-title: European Conference on Computer Vision
– start-page: 567
  year: 2015
  end-page: 576
  ident: bib0255
  article-title: SUN RGB-D: a RGB-D scene understanding benchmark suite
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2012
  ident: bib0325
  article-title: The Object Segmentation Database (OSD)
– year: 2016
  ident: bib0380
  article-title: ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation
– start-page: 102
  year: 2016
  end-page: 118
  ident: bib0135
  article-title: Playing for Data: Ground Truth from Computer Games
– start-page: 103
  year: 2014
  end-page: 111
  ident: bib0535
  article-title: On the properties of neural machine translation: encoder-decoder approaches
  publication-title: Proceedings of SSST@EMNLP 2014, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation
– volume: 34
  start-page: 12
  year: 2016
  end-page: 27
  ident: bib0060
  article-title: Beyond pixels: a comprehensive survey from bottom-up to semantic image segmentation and cosegmentation
  publication-title: J. Vis. Commun. Image Represent.
– start-page: 3320
  year: 2014
  end-page: 3328
  ident: bib0115
  article-title: How transferable are features in deep neural networks?
  publication-title: Advances in Neural Information Processing Systems
– start-page: 1717
  year: 2014
  end-page: 1724
  ident: bib0110
  article-title: Learning and transferring mid-level image representations using convolutional neural networks
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– volume: 32
  start-page: 1231
  year: 2013
  end-page: 1237
  ident: bib0295
  article-title: Vision meets robotics: the KITTI dataset
  publication-title: Int. J. Robot. Res.
– start-page: 1
  year: 2009
  end-page: 8
  ident: bib0215
  article-title: Decomposing a scene into geometric and semantically consistent regions
  publication-title: 2009 IEEE 12th International Conference on Computer Vision
– start-page: 746
  year: 2012
  end-page: 760
  ident: bib0245
  article-title: Indoor segmentation and support inference from RGBD images
  publication-title: European Conference on Computer Vision
– volume: 39
  start-page: 2481
  year: 2015
  end-page: 2495
  ident: bib0350
  article-title: SegNet: a deep convolutional encoder-decoder architecture for scene segmentation
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– start-page: 1097
  year: 2012
  end-page: 1105
  ident: bib0070
  article-title: ImageNet classification with deep convolutional neural networks
  publication-title: Advances in Neural Information Processing Systems
– start-page: 44
  year: 2008
  end-page: 57
  ident: bib0290
  article-title: Segmentation and recognition using structure from motion point clouds
  publication-title: European Conference on Computer Vision
– start-page: 328
  year: 2014
  end-page: 335
  ident: bib0550
  article-title: Multiscale combinatorial grouping
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– start-page: 5105
  year: 2017
  end-page: 5114
  ident: bib0465
  article-title: PointNet++: deep hierarchical feature learning on point sets in a metric space
  publication-title: Advances in Neural Information Processing Systems
– year: 2016
  ident: bib0120
  article-title: Context Encoders: Feature Learning by Inpainting, CoRR abs/1604.07379
– start-page: 1817
  year: 2011
  end-page: 1824
  ident: bib0260
  article-title: A large-scale hierarchical multi-view RGB-D object dataset
  publication-title: 2011 IEEE International Conference on Robotics and Automation (ICRA)
– year: 2016
  ident: bib0630
  article-title: Pruning Convolutional Neural Networks for Resource Efficient Transfer Learning
– start-page: 3213
  year: 2016
  end-page: 3223
  ident: bib0015
  article-title: The Cityscapes dataset for semantic urban scene understanding
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2017
  ident: bib0240
  article-title: The 2017 Davis Challenge on Video Object Segmentation
– start-page: 3547
  year: 2015
  end-page: 3555
  ident: bib0425
  article-title: Scene labeling with LSTM recurrent neural networks
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2017
  ident: bib0500
  article-title: Road Extraction by Deep Residual U-Net, CoRR abs/1711.10684
– start-page: 321
  year: 2014
  end-page: 326
  ident: bib0575
  article-title: Discriminative feature learning for video semantic segmentation
  publication-title: 2014 International Conference on Virtual Reality and Visualization (ICVRV)
– year: 2016
  ident: bib0615
  article-title: Learning convolutional neural networks for graphs
  publication-title: Proceedings of the 33rd Annual International Conference on Machine Learning
– year: 2016
  ident: bib0610
  article-title: Semi-supervised Classification with Graph Convolutional Networks
– year: 2016
  ident: bib0455
  article-title: Point cloud labeling using 3D convolutional neural network
  publication-title: Proc. of the International Conf. on Pattern Recognition (ICPR), vol. 2
– year: 2016
  ident: bib0410
  article-title: Pyramid Scene Parsing Network, CoRR abs/1612.01105
– start-page: 1
  year: 2016
  end-page: 8
  ident: bib0400
  article-title: Multiscale fully convolutional network with application to industrial inspection
  publication-title: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV)
– start-page: 1972
  year: 2009
  end-page: 1979
  ident: bib0220
  article-title: Nonparametric scene parsing: label transfer via dense scene alignment
  publication-title: IEEE Conference on Computer Vision and Pattern Recognition, 2009, CVPR 2009
– start-page: 818
  year: 2014
  end-page: 833
  ident: bib0495
  article-title: Visualizing and understanding convolutional networks
  publication-title: European Conference on Computer Vision
– start-page: 1383
  year: 2017
  end-page: 1386
  ident: bib0560
  article-title: Multi-view self-supervised deep learning for 6D pose estimation in the Amazon picking challenge
  publication-title: 2017 IEEE International Conference on Robotics and Automation (ICRA)
– year: 2016
  ident: bib0265
  article-title: A scalable active framework for region annotation in 3D shape collections
  publication-title: SIGGRAPH Asia
– start-page: 2018
  year: 2011
  end-page: 2025
  ident: bib0490
  article-title: Adaptive deconvolutional networks for mid and high level feature learning
  publication-title: 2011 IEEE International Conference on Computer Vision (ICCV)
– start-page: 376
  year: 2012
  end-page: 389
  ident: bib0195
  article-title: Road scene segmentation from a single image
  publication-title: European Conference on Computer Vision
– year: 2016
  ident: bib0365
  article-title: DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs, CoRR abs/1606.00915
– start-page: 231
  year: 2015
  end-page: 238
  ident: bib0205
  article-title: Vision-based offline-online perception paradigm for autonomous driving
  publication-title: 2015 IEEE Winter Conference on Applications of Computer Vision (WACV)
– start-page: 1
  year: 2015
  end-page: 9
  ident: bib0085
  article-title: Going deeper with convolutions
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2014
  ident: bib0160
  article-title: Detect what you can: detecting and representing objects using holistic models and body parts
  publication-title: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
– start-page: 1990
  year: 2015
  end-page: 1998
  ident: bib0440
  article-title: Learning to segment object candidates
  publication-title: Advances in Neural Information Processing Systems
– start-page: 4489
  year: 2015
  end-page: 4497
  ident: bib0585
  article-title: Learning spatiotemporal features with 3D convolutional networks
  publication-title: Proceedings of the IEEE International Conference on Computer Vision
– start-page: 234
  year: 2015
  end-page: 241
  ident: bib0345
  article-title: U-Net: convolutional networks for biomedical image segmentation
  publication-title: Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. 9351 of LNCS
– volume: 30
  start-page: 88
  year: 2009
  end-page: 97
  ident: bib0185
  article-title: Semantic object classes in video: a high-definition ground truth database
  publication-title: Pattern Recognit. Lett.
– start-page: 1610
  year: 2016
  end-page: 1618
  ident: bib0285
  article-title: Contour detection in unstructured 3D point clouds
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2012
  ident: bib0280
  article-title: An occlusion-aware feature for range images
  publication-title: IEEE International Conference on Robotics and Automation, 2012, ICRA’12
– start-page: 297
  year: 2014
  end-page: 312
  ident: bib0050
  article-title: Simultaneous detection and segmentation
  publication-title: European Conference on Computer Vision
– start-page: 740
  year: 2014
  end-page: 755
  ident: bib0170
  article-title: Microsoft COCO: common objects in context
  publication-title: European Conference on Computer Vision
– start-page: 656
  year: 2014
  end-page: 671
  ident: bib0225
  article-title: Supervoxel-consistent foreground propagation in video
  publication-title: European Conference on Computer Vision
– year: 2015
  ident: bib0360
  article-title: Semantic image segmentation with deep convolutional nets and fully connected CRFs
  publication-title: International Conference on Learning Representations
– start-page: 3479
  year: 2015
  end-page: 3487
  ident: bib0230
  article-title: Material recognition in the wild with the materials in context database
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2016
  ident: bib0065
  article-title: A Survey of Semantic Segmentation, CoRR abs/1602.06541
– year: 2014
  ident: bib0075
  article-title: Very Deep Convolutional Networks for Large-Scale Image Recognition
– start-page: 3620
  year: 2016
  end-page: 3629
  ident: bib0435
  article-title: DAG-recurrent neural networks for scene labeling
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2015
  ident: bib0385
  article-title: Multi-scale Convolutional Architecture for Semantic Segmentation
– year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0405
– year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0625
– year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0605
– start-page: 157
  year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0030
  article-title: Deep learning for content-based image retrieval: a comprehensive study
– year: 2017
  ident: 10.1016/j.asoc.2018.05.018_bib0485
– year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0160
  article-title: Detect what you can: detecting and representing objects using holistic models and body parts
  publication-title: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
– start-page: 75
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0445
  article-title: Learning to refine object segments
– start-page: 1717
  year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0110
  article-title: Learning and transferring mid-level image representations using convolutional neural networks
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0595
– volume: 13
  start-page: 32
  issue: 3
  year: 2017
  ident: 10.1016/j.asoc.2018.05.018_bib0620
  article-title: Structured pruning of deep convolutional neural networks
  publication-title: ACM J. Emerg. Technol. Comput. Syst.
  doi: 10.1145/3005348
– start-page: 746
  year: 2012
  ident: 10.1016/j.asoc.2018.05.018_bib0245
  article-title: Indoor segmentation and support inference from RGBD images
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0460
– start-page: 328
  year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0550
  article-title: Multiscale combinatorial grouping
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0570
  article-title: FuseNet: incorporating depth into semantic segmentation via fusion-based CNN architecture
  publication-title: Proc. ACCV, vol. 2
– year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0020
– volume: 9
  start-page: 1735
  issue: 8
  year: 1997
  ident: 10.1016/j.asoc.2018.05.018_bib0530
  article-title: Long short-term memory
  publication-title: Neural Comput.
  doi: 10.1162/neco.1997.9.8.1735
– start-page: 44
  year: 2008
  ident: 10.1016/j.asoc.2018.05.018_bib0290
  article-title: Segmentation and recognition using structure from motion point clouds
– start-page: 3354
  year: 2012
  ident: 10.1016/j.asoc.2018.05.018_bib0010
  article-title: Are we ready for autonomous driving? The KITTI vision benchmark suite
  publication-title: 2012 IEEE Conference on Computer Vision and Pattern Recognition
  doi: 10.1109/CVPR.2012.6248074
– year: 2018
  ident: 10.1016/j.asoc.2018.05.018_bib0470
– start-page: 24
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0025
  article-title: Learning a deep convolutional network for light-field image super-resolution
  publication-title: Proceedings of the IEEE International Conference on Computer Vision Workshops
– year: 2017
  ident: 10.1016/j.asoc.2018.05.018_bib0240
– volume: 77
  start-page: 157
  issue: 1
  year: 2008
  ident: 10.1016/j.asoc.2018.05.018_bib0310
  article-title: LabelMe: a database and web-based tool for image annotation
  publication-title: Int. J. Comput. Vis.
  doi: 10.1007/s11263-007-0090-8
– start-page: 2018
  year: 2011
  ident: 10.1016/j.asoc.2018.05.018_bib0490
  article-title: Adaptive deconvolutional networks for mid and high level feature learning
– start-page: 580
  year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0555
  article-title: Rich feature hierarchies for accurate object detection and semantic segmentation
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– start-page: 991
  year: 2011
  ident: 10.1016/j.asoc.2018.05.018_bib0165
  article-title: Semantic contours from inverse detectors
– start-page: 1
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0400
  article-title: Multiscale fully convolutional network with application to industrial inspection
– start-page: 4489
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0585
  article-title: Learning spatiotemporal features with 3D convolutional networks
  publication-title: Proceedings of the IEEE International Conference on Computer Vision
– start-page: 231
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0205
  article-title: Vision-based offline-online perception paradigm for autonomous driving
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0065
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0120
– volume: 32
  issue: 4
  year: 2013
  ident: 10.1016/j.asoc.2018.05.018_bib0305
  article-title: OpenSurfaces: a richly annotated catalog of surface appearance
  publication-title: ACM Trans. Graph. (SIGGRAPH)
  doi: 10.1145/2461912.2462002
– start-page: 818
  year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0495
  article-title: Visualizing and understanding convolutional networks
– start-page: 1610
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0285
  article-title: Contour detection in unstructured 3D point clouds
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– volume: 81
  start-page: 2
  issue: 1
  year: 2009
  ident: 10.1016/j.asoc.2018.05.018_bib0510
  article-title: Textonboost for image understanding: multi-class object recognition and segmentation by jointly modeling texture, layout, and context
  publication-title: Int. J. Comput. Vis.
  doi: 10.1007/s11263-007-0109-1
– start-page: 1383
  year: 2017
  ident: 10.1016/j.asoc.2018.05.018_bib0560
  article-title: Multi-view self-supervised deep learning for 6D pose estimation in the Amazon picking challenge
– volume: 115
  start-page: 211
  issue: 3
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0130
  article-title: ImageNet large scale visual recognition challenge
  publication-title: Int. J. Comput. Vis.
  doi: 10.1007/s11263-015-0816-y
– start-page: 141
  year: 2013
  ident: 10.1016/j.asoc.2018.05.018_bib0320
– start-page: 248
  year: 2009
  ident: 10.1016/j.asoc.2018.05.018_bib0125
  article-title: ImageNet: a large-scale hierarchical image database
– start-page: 549
  year: 2007
  ident: 10.1016/j.asoc.2018.05.018_bib0095
– volume: 32
  start-page: 1231
  issue: 11
  year: 2013
  ident: 10.1016/j.asoc.2018.05.018_bib0295
  article-title: Vision meets robotics: the KITTI dataset
  publication-title: Int. J. Robot. Res.
  doi: 10.1177/0278364913491297
– start-page: 82
  year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0430
  article-title: Recurrent convolutional neural networks for scene labeling
  publication-title: ICML
– start-page: 103
  year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0535
  article-title: On the properties of neural machine translation: encoder-decoder approaches
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0365
– start-page: 3213
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0015
  article-title: The Cityscapes dataset for semantic urban scene understanding
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– start-page: 1625
  year: 2013
  ident: 10.1016/j.asoc.2018.05.018_bib0250
  article-title: SUN3D: a database of big spaces reconstructed using SfM and object labels
  publication-title: 2013 IEEE International Conference on Computer Vision
  doi: 10.1109/ICCV.2013.458
– start-page: 770
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0090
  article-title: Deep residual learning for image recognition
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0100
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0380
– year: 2017
  ident: 10.1016/j.asoc.2018.05.018_bib0500
– volume: vol. 35
  start-page: 93
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0145
  article-title: Automatic portrait segmentation for image stylization
– volume: 2
  start-page: 4
  issue: 3
  year: 2011
  ident: 10.1016/j.asoc.2018.05.018_bib0515
  article-title: Efficient inference in fully connected CRFs with Gaussian edge potentials
  publication-title: Adv. Neural Inf. Process. Syst.
– start-page: 321
  year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0575
  article-title: Discriminative feature learning for video semantic segmentation
– start-page: 345
  year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0055
  article-title: Learning rich features from RGB-D images for object detection and segmentation
– year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0375
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0610
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0415
  article-title: ReSeg: a recurrent neural network-based model for semantic segmentation
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0630
– volume: 30
  start-page: 88
  issue: 2
  year: 2009
  ident: 10.1016/j.asoc.2018.05.018_bib0185
  article-title: Semantic object classes in video: a high-definition ground truth database
  publication-title: Pattern Recognit. Lett.
  doi: 10.1016/j.patrec.2008.04.005
– start-page: 740
  year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0170
  article-title: Microsoft COCO: common objects in context
– start-page: 5105
  year: 2017
  ident: 10.1016/j.asoc.2018.05.018_bib0465
  article-title: PointNet++: deep hierarchical feature learning on point sets in a metric space
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0450
  article-title: A multipath network for object detection
– start-page: 2
  year: 2009
  ident: 10.1016/j.asoc.2018.05.018_bib0005
  article-title: Segmentation-based urban traffic scene understanding
  publication-title: BMVC, vol. 1
– start-page: 297
  year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0050
  article-title: Simultaneous detection and segmentation
– year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0180
  article-title: The cityscapes dataset
  publication-title: CVPR Workshop on The Future of Datasets in Vision
– start-page: 3431
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0340
  article-title: Fully convolutional networks for semantic segmentation
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– volume: 111
  start-page: 98
  issue: 1
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0150
  article-title: The PASCAL visual object classes challenge: a retrospective
  publication-title: Int. J. Comput. Vis.
  doi: 10.1007/s11263-014-0733-5
– year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0525
– start-page: 102
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0135
– start-page: 1850
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0210
  article-title: Sensor fusion for semantic segmentation of urban scenes
– start-page: 478
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0545
  article-title: Deep contrast learning for salient object detection
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2012
  ident: 10.1016/j.asoc.2018.05.018_bib0280
  article-title: An occlusion-aware feature for range images
– year: 2012
  ident: 10.1016/j.asoc.2018.05.018_bib0325
– year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0330
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0410
– start-page: 513
  year: 2013
  ident: 10.1016/j.asoc.2018.05.018_bib0520
  article-title: Parameter learning and convergent inference for dense random fields
  publication-title: ICML (3)
– start-page: 1534
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0335
  article-title: 3D semantic parsing of large-scale indoor spaces
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0540
– start-page: 2650
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0390
  article-title: Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture
  publication-title: Proceedings of the IEEE International Conference on Computer Vision
– start-page: 1972
  year: 2009
  ident: 10.1016/j.asoc.2018.05.018_bib0220
  article-title: Nonparametric scene parsing: label transfer via dense scene alignment
– start-page: 3620
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0435
  article-title: DAG-recurrent neural networks for scene labeling
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– start-page: 186
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0395
  article-title: A multi-scale CNN for affordance segmentation in RGB images
– year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0155
  article-title: The role of context for object detection and semantic segmentation in the wild
  publication-title: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
– start-page: 1097
  year: 2012
  ident: 10.1016/j.asoc.2018.05.018_bib0070
  article-title: ImageNet classification with deep convolutional neural networks
– start-page: 1817
  year: 2011
  ident: 10.1016/j.asoc.2018.05.018_bib0260
  article-title: A large-scale hierarchical multi-view RGB-D object dataset
– start-page: 656
  year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0225
  article-title: Supervoxel-consistent foreground propagation in video
– start-page: 567
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0255
  article-title: SUN RGB-D: a RGB-D scene understanding benchmark suite
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– volume: 8
  start-page: 3627
  issue: 8
  year: 2017
  ident: 10.1016/j.asoc.2018.05.018_bib0590
  article-title: ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks
  publication-title: Biomed. Opt. Express
  doi: 10.1364/BOE.8.003627
– volume: 14
  start-page: 1360
  issue: 9
  year: 2005
  ident: 10.1016/j.asoc.2018.05.018_bib0035
  article-title: Toward automatic phenotyping of developing embryos from videos
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2005.852470
– volume: 23
  start-page: 309
  year: 2004
  ident: 10.1016/j.asoc.2018.05.018_bib0505
  article-title: GrabCut: interactive foreground extraction using iterated graph cuts
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0235
  article-title: A benchmark dataset and evaluation methodology for video object segmentation
  publication-title: Computer Vision and Pattern Recognition
– year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0355
– start-page: 3547
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0425
  article-title: Scene labeling with LSTM recurrent neural networks
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0265
  article-title: A scalable active framework for region annotation in 3D shape collections
  publication-title: SIGGRAPH Asia
  doi: 10.1145/2980179.2980238
– volume: 28
  issue: 3
  year: 2009
  ident: 10.1016/j.asoc.2018.05.018_bib0275
  article-title: A benchmark for 3D mesh segmentation
  publication-title: ACM Trans. Graph. (Proc. SIGGRAPH)
  doi: 10.1145/1531326.1531379
– start-page: 376
  year: 2012
  ident: 10.1016/j.asoc.2018.05.018_bib0195
  article-title: Road scene segmentation from a single image
– start-page: 69
  year: 2008
  ident: 10.1016/j.asoc.2018.05.018_bib0105
  article-title: Training hierarchical feed-forward visual recognition models using transfer learning from pseudo-tasks
– start-page: 1990
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0440
  article-title: Learning to segment object candidates
– start-page: 3234
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0175
  article-title: The SYNTHIA dataset: a large collection of synthetic images for semantic segmentation of urban scenes
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– start-page: 2843
  year: 2012
  ident: 10.1016/j.asoc.2018.05.018_bib0040
  article-title: Deep neural networks segment neuronal membranes in electron microscopy images
– volume: 34
  start-page: 12
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0060
  article-title: Beyond pixels: a comprehensive survey from bottom-up to semantic image segmentation and cosegmentation
  publication-title: J. Vis. Commun. Image Represent.
  doi: 10.1016/j.jvcir.2015.10.012
– start-page: 234
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0345
  article-title: U-Net: convolutional networks for biomedical image segmentation
– year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0360
  article-title: Semantic image segmentation with deep convolutional nets and fully connected CRFs
  publication-title: International Conference on Learning Representations
– volume: 23
  start-page: 1222
  issue: 11
  year: 2001
  ident: 10.1016/j.asoc.2018.05.018_bib0580
  article-title: Fast approximate energy minimization via graph cuts
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/34.969114
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0615
  article-title: Learning convolutional neural networks for graphs
– year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0455
  article-title: Point cloud labeling using 3D convolutional neural network
  publication-title: Proc. of the International Conf. on Pattern Recognition (ICPR), vol. 2
– start-page: 3320
  year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0115
  article-title: How transferable are features in deep neural networks?
– start-page: 537
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0200
  article-title: Unsupervised image transformation for outdoor semantic labelling
– start-page: 852
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0475
  article-title: Clockwork convnets for video semantic segmentation
– volume: 35
  start-page: 1915
  issue: 8
  year: 2013
  ident: 10.1016/j.asoc.2018.05.018_bib0045
  article-title: Learning hierarchical features for scene labeling
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2012.231
– year: 2014
  ident: 10.1016/j.asoc.2018.05.018_bib0075
– start-page: 3282
  year: 2012
  ident: 10.1016/j.asoc.2018.05.018_bib0300
  article-title: Learning object class detectors from weakly annotated video
– start-page: 1
  year: 2009
  ident: 10.1016/j.asoc.2018.05.018_bib0215
  article-title: Decomposing a scene into geometric and semantically consistent regions
– start-page: 1
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0140
  article-title: Understanding data augmentation for classification: when to warp?
– volume: 39
  start-page: 2481
  issue: 12
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0350
  article-title: SegNet: a deep convolutional encoder-decoder architecture for image segmentation
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2016.2644615
– start-page: 17
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0480
  article-title: Deep end2end voxel2voxel prediction
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops
– year: 2017
  ident: 10.1016/j.asoc.2018.05.018_bib0600
  article-title: Focal loss for dense object detection
– year: 2009
  ident: 10.1016/j.asoc.2018.05.018_bib0190
  article-title: Combining appearance and structure from motion features for road scene understanding
  publication-title: British Machine Vision Conference (BMVC), BMVA
– start-page: 541
  year: 2016
  ident: 10.1016/j.asoc.2018.05.018_bib0420
– start-page: 1520
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0080
  article-title: Learning deconvolution network for semantic segmentation
  publication-title: Proceedings of the IEEE International Conference on Computer Vision
– start-page: 1
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0085
  article-title: Going deeper with convolutions
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0385
– start-page: 564
  year: 2013
  ident: 10.1016/j.asoc.2018.05.018_bib0315
  article-title: Perceptual organization and recognition of indoor scenes from RGB-D images
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2017
  ident: 10.1016/j.asoc.2018.05.018_bib0565
– start-page: 3479
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0230
  article-title: Material recognition in the wild with the materials in context database
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– year: 2017
  ident: 10.1016/j.asoc.2018.05.018_bib0270
– start-page: 1529
  year: 2015
  ident: 10.1016/j.asoc.2018.05.018_bib0370
  article-title: Conditional random fields as recurrent neural networks
  publication-title: Proceedings of the IEEE International Conference on Computer Vision
StartPage 41
SubjectTerms Deep learning
Scene labeling
Semantic segmentation
Title A survey on deep learning techniques for image and video semantic segmentation
URI https://dx.doi.org/10.1016/j.asoc.2018.05.018
Volume 70