Object detection in optical remote sensing images: A survey and a new benchmark

Bibliographic Details
Published in: ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 159, pp. 296-307
Main Authors: Li, Ke; Wan, Gang; Cheng, Gong; Meng, Liqiu; Han, Junwei
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.01.2020
ISSN: 0924-2716
eISSN: 1872-8235
DOI: 10.1016/j.isprsjprs.2019.11.023

Abstract

Substantial efforts have been devoted in recent years to presenting various methods for object detection in optical remote sensing images. However, the current survey of datasets and deep learning based methods for object detection in optical remote sensing images is not adequate. Moreover, most of the existing datasets have some shortcomings; for example, the numbers of images and object categories are small scale, and the image diversity and variations are insufficient. These limitations greatly affect the development of deep learning based object detection methods. In this paper, we provide a comprehensive review of the recent deep learning based object detection progress in both the computer vision and earth observation communities. Then, we propose a large-scale, publicly available benchmark for object DetectIon in Optical Remote sensing images, which we name DIOR. The dataset contains 23,463 images and 192,472 instances, covering 20 object classes. The proposed DIOR dataset (1) is large-scale in the number of object categories, object instances, and total images; (2) has a large range of object size variations, not only in terms of spatial resolutions, but also in the aspect of inter- and intra-class size variability across objects; (3) holds big variations as the images are obtained with different imaging conditions, weathers, seasons, and image quality; and (4) has high inter-class similarity and intra-class diversity. The proposed benchmark can help researchers to develop and validate their data-driven methods. Finally, we evaluate several state-of-the-art approaches on our DIOR dataset to establish a baseline for future research.
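As a quick sanity check on the dataset's scale, the counts quoted in the abstract imply roughly eight annotated object instances per image on average. A minimal sketch of this arithmetic, using only the numbers stated above (the variable names are illustrative, not from any official DIOR tooling):

```python
# Counts as stated in the abstract of the DIOR benchmark paper.
num_images = 23_463
num_instances = 192_472
num_classes = 20

# Average annotation density and per-class volume implied by those counts.
instances_per_image = num_instances / num_images   # ~8.2 instances per image
instances_per_class = num_instances / num_classes  # 9623.6 instances per class

print(f"~{instances_per_image:.1f} instances per image")
print(f"~{instances_per_class:.0f} instances per class on average")
```

Note that these are dataset-wide averages; the abstract's point (4) about high intra-class diversity suggests the per-class counts are unlikely to be uniform in practice.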
Author Details
1. Li, Ke (ORCID 0000-0002-7873-1554). Zhengzhou Institute of Surveying and Mapping, Zhengzhou 450052, China.
2. Wan, Gang. Zhengzhou Institute of Surveying and Mapping, Zhengzhou 450052, China.
3. Cheng, Gong (ORCID 0000-0001-5030-0683; gcheng@nwpu.edu.cn). School of Automation, Northwestern Polytechnical University, Xi'an 710072, China.
4. Meng, Liqiu. Department of Cartography, Technical University of Munich, Arcisstr. 21, 80333 Munich, Germany.
5. Han, Junwei (ORCID 0000-0001-5545-7217; junweihan2010@gmail.com). School of Automation, Northwestern Polytechnical University, Xi'an 710072, China.
crossref_primary_10_3390_rs15112827
crossref_primary_10_1109_ACCESS_2023_3277227
crossref_primary_10_1109_ACCESS_2024_3409393
crossref_primary_10_1016_j_patrec_2023_05_031
crossref_primary_10_1109_JSTARS_2023_3335294
crossref_primary_10_1109_MGRS_2024_3404506
crossref_primary_10_3390_rs14194951
crossref_primary_10_1016_j_knosys_2025_113003
crossref_primary_10_1080_01431161_2021_2012294
crossref_primary_10_1109_TGRS_2023_3266838
crossref_primary_10_1109_LGRS_2022_3184836
crossref_primary_10_1109_TGRS_2020_3045474
crossref_primary_10_3390_math12091343
crossref_primary_10_3390_rs14030688
crossref_primary_10_1080_15481603_2022_2163063
crossref_primary_10_3390_rs14051166
crossref_primary_10_3390_rs12101667
crossref_primary_10_1016_j_patcog_2025_111404
crossref_primary_10_1109_JSTARS_2023_3316302
crossref_primary_10_1109_TGRS_2021_3085870
crossref_primary_10_1109_JSTARS_2021_3139926
crossref_primary_10_3390_rs16132317
crossref_primary_10_1049_cvi2_12317
crossref_primary_10_1080_01431161_2021_1954261
crossref_primary_10_1109_TGRS_2024_3367294
crossref_primary_10_3390_rs15215131
crossref_primary_10_1109_ACCESS_2022_3149052
crossref_primary_10_1109_ACCESS_2023_3267435
crossref_primary_10_1016_j_isprsjprs_2022_11_008
crossref_primary_10_1080_22797254_2023_2289616
crossref_primary_10_1007_s00521_024_09422_6
crossref_primary_10_1007_s11182_023_02990_5
crossref_primary_10_1109_JSTARS_2022_3206143
crossref_primary_10_1117_1_JRS_18_028503
crossref_primary_10_3390_s20164450
crossref_primary_10_3390_rs15082096
crossref_primary_10_1109_TGRS_2024_3492046
crossref_primary_10_3390_rs16214034
crossref_primary_10_1109_ACCESS_2020_3035839
crossref_primary_10_1109_LGRS_2024_3423796
crossref_primary_10_3390_rs13122318
crossref_primary_10_1109_JSTARS_2024_3394887
crossref_primary_10_1109_TIP_2021_3077144
crossref_primary_10_3390_s23135849
crossref_primary_10_1016_j_jag_2024_104019
crossref_primary_10_1142_S0218001423510175
crossref_primary_10_1109_TGRS_2023_3250448
crossref_primary_10_1109_ACCESS_2022_3161568
crossref_primary_10_1109_TGRS_2022_3215543
crossref_primary_10_3390_rs15030614
crossref_primary_10_1177_1094342020945026
crossref_primary_10_1109_TIP_2024_3424335
crossref_primary_10_1016_j_isprsjprs_2020_09_008
crossref_primary_10_1109_TGRS_2024_3381774
crossref_primary_10_1080_10106049_2024_2322061
crossref_primary_10_1109_TGRS_2024_3485590
crossref_primary_10_1109_TGRS_2023_3325927
crossref_primary_10_1109_TGRS_2024_3415002
crossref_primary_10_1109_MGRS_2021_3115137
crossref_primary_10_3788_LOP231615
crossref_primary_10_3390_s21051743
crossref_primary_10_1080_01431161_2022_2061316
crossref_primary_10_3390_rs15051307
crossref_primary_10_20965_jrm_2021_p1135
crossref_primary_10_3390_app12157622
crossref_primary_10_3390_drones6080188
crossref_primary_10_1016_j_sigpro_2023_109001
crossref_primary_10_1155_2022_8077563
crossref_primary_10_1108_IJICC_01_2024_0020
crossref_primary_10_7717_peerj_cs_1583
crossref_primary_10_1109_JSTARS_2024_3424954
crossref_primary_10_3390_rs14164049
crossref_primary_10_1016_j_engappai_2025_110315
crossref_primary_10_1109_TGRS_2022_3228612
crossref_primary_10_1109_TPAMI_2022_3166956
crossref_primary_10_1016_j_optlastec_2024_110727
crossref_primary_10_32604_cmc_2023_033038
crossref_primary_10_1016_j_neucom_2022_05_052
crossref_primary_10_3390_rs16224265
crossref_primary_10_3390_rs14215488
crossref_primary_10_1109_TGRS_2023_3294241
crossref_primary_10_1080_01431161_2024_2416592
crossref_primary_10_3788_AOS240606
crossref_primary_10_1109_JSTARS_2024_3454333
crossref_primary_10_1109_LGRS_2023_3291505
crossref_primary_10_3390_rs16071288
crossref_primary_10_1109_TGRS_2024_3369666
crossref_primary_10_1109_TGRS_2020_3035469
crossref_primary_10_1109_TGRS_2022_3202499
crossref_primary_10_1080_02726351_2023_2268567
crossref_primary_10_3390_rs13214386
crossref_primary_10_1007_s13042_024_02357_3
crossref_primary_10_3390_rs15225372
crossref_primary_10_1109_ACCESS_2023_3290480
crossref_primary_10_1016_j_jag_2024_104044
crossref_primary_10_1109_TGRS_2023_3298852
crossref_primary_10_1049_itr2_12421
crossref_primary_10_1109_TGRS_2021_3119344
crossref_primary_10_1109_TGRS_2023_3250471
crossref_primary_10_1016_j_isprsjprs_2022_10_003
crossref_primary_10_3390_jmse12020352
crossref_primary_10_1080_10095020_2024_2378920
crossref_primary_10_3390_ijgi11030158
crossref_primary_10_3390_rs16030516
crossref_primary_10_3390_rs14215471
crossref_primary_10_3390_rs17030502
crossref_primary_10_1109_TGRS_2023_3283137
crossref_primary_10_1016_j_jag_2023_103522
crossref_primary_10_1109_TGRS_2020_2991407
crossref_primary_10_1109_LGRS_2024_3462089
crossref_primary_10_3390_electronics9091356
crossref_primary_10_3390_rs14184519
crossref_primary_10_1016_j_isprsjprs_2024_01_015
crossref_primary_10_1109_TGRS_2024_3474925
crossref_primary_10_1016_j_neunet_2024_106416
crossref_primary_10_1109_JSTARS_2023_3269852
crossref_primary_10_3390_rs12152501
crossref_primary_10_3390_rs13040683
crossref_primary_10_1016_j_jksuci_2024_102113
crossref_primary_10_1109_TGRS_2024_3380645
crossref_primary_10_1109_TGRS_2021_3093556
crossref_primary_10_1080_01431161_2022_2128924
crossref_primary_10_1109_LGRS_2021_3125502
crossref_primary_10_1109_TGRS_2024_3480122
crossref_primary_10_3390_rs16224251
crossref_primary_10_3390_s23146423
crossref_primary_10_1080_10095020_2023_2244005
crossref_primary_10_1109_JSEN_2024_3444920
crossref_primary_10_1109_LGRS_2023_3292890
crossref_primary_10_1016_j_isprsjprs_2024_01_005
crossref_primary_10_1088_1402_4896_ad6e3b
crossref_primary_10_1109_LGRS_2020_2975541
crossref_primary_10_1109_JSTARS_2020_3005403
crossref_primary_10_3390_rs16193637
crossref_primary_10_1142_S1793962321500318
crossref_primary_10_1080_01431161_2022_2066487
crossref_primary_10_3390_rs15245788
crossref_primary_10_1016_j_jag_2023_103301
crossref_primary_10_1080_09720529_2022_2068602
crossref_primary_10_3390_rs16193630
crossref_primary_10_1109_TGRS_2024_3514376
crossref_primary_10_1016_j_neucom_2023_03_046
crossref_primary_10_1109_TGRS_2024_3379436
crossref_primary_10_3390_rs15205050
crossref_primary_10_3390_rs15153883
crossref_primary_10_1080_19361610_2022_2111184
crossref_primary_10_1109_TMM_2022_3210389
crossref_primary_10_3390_rs14020382
crossref_primary_10_1109_LGRS_2021_3052017
crossref_primary_10_1016_j_jag_2025_104394
crossref_primary_10_1080_07038992_2021_1898937
crossref_primary_10_1080_01431161_2021_1931537
crossref_primary_10_1109_TGRS_2024_3450732
crossref_primary_10_3390_app14177665
crossref_primary_10_1109_LGRS_2023_3325410
crossref_primary_10_3390_rs15205062
crossref_primary_10_1080_01431161_2021_1949069
crossref_primary_10_1016_j_sigpro_2024_109449
crossref_primary_10_1007_s42979_023_02434_4
crossref_primary_10_1109_JSEN_2024_3362982
crossref_primary_10_3390_rs12010143
crossref_primary_10_3390_electronics12143201
crossref_primary_10_3390_rs15061574
crossref_primary_10_3390_rs17010162
crossref_primary_10_1109_TNNLS_2023_3336563
crossref_primary_10_1515_comp_2023_0105
crossref_primary_10_5194_essd_13_5389_2021
crossref_primary_10_1109_TCYB_2021_3069920
crossref_primary_10_1109_LGRS_2023_3303896
crossref_primary_10_3390_rs17050818
crossref_primary_10_3389_fmars_2024_1480796
crossref_primary_10_26833_ijeg_1107890
crossref_primary_10_1016_j_ufug_2023_127943
crossref_primary_10_3390_app14188535
crossref_primary_10_3390_rs15205031
crossref_primary_10_1109_JSTARS_2024_3461165
crossref_primary_10_3390_rs17050893
crossref_primary_10_3390_rs16234485
crossref_primary_10_1016_j_patcog_2024_110976
crossref_primary_10_1080_01431161_2025_2479886
crossref_primary_10_1109_TGRS_2022_3173373
crossref_primary_10_1109_TGRS_2021_3067470
crossref_primary_10_1117_1_JRS_18_016514
crossref_primary_10_1016_j_mejo_2021_105319
crossref_primary_10_1109_ACCESS_2024_3382245
crossref_primary_10_1109_MGRS_2024_3383473
crossref_primary_10_1016_j_ophoto_2024_100069
crossref_primary_10_1109_ACCESS_2020_3002829
crossref_primary_10_1109_LGRS_2024_3406345
crossref_primary_10_1016_j_jclepro_2022_133026
crossref_primary_10_23919_JSEE_2023_000035
crossref_primary_10_1016_j_patcog_2024_110983
crossref_primary_10_3390_rs13193908
crossref_primary_10_1142_S0218001422540209
crossref_primary_10_7780_kjrs_2024_40_6_1_6
crossref_primary_10_1109_JSTARS_2022_3227322
crossref_primary_10_1007_s40747_024_01652_4
crossref_primary_10_1016_j_compeleceng_2024_110042
crossref_primary_10_1016_j_eswa_2024_125826
crossref_primary_10_3934_mbe_2023282
crossref_primary_10_3390_rs13245132
crossref_primary_10_1007_s40747_024_01676_w
crossref_primary_10_1007_s12145_025_01751_x
crossref_primary_10_1016_j_eswa_2022_116793
crossref_primary_10_1080_2150704X_2024_2439076
crossref_primary_10_1109_JSTARS_2023_3335891
crossref_primary_10_1109_MGRS_2024_3450681
crossref_primary_10_1109_TGRS_2024_3402825
crossref_primary_10_1016_j_ophoto_2024_100071
crossref_primary_10_1007_s11760_024_03288_w
crossref_primary_10_1117_1_JRS_18_016507
crossref_primary_10_14801_jkiit_2024_22_10_35
crossref_primary_10_1109_TGRS_2023_3316153
crossref_primary_10_1109_LGRS_2024_3385399
crossref_primary_10_1109_LGRS_2023_3314517
crossref_primary_10_3390_electronics12132758
crossref_primary_10_1109_LGRS_2022_3230973
crossref_primary_10_1109_TGRS_2023_3247578
crossref_primary_10_3390_rs16020327
crossref_primary_10_1109_JSTARS_2022_3203126
crossref_primary_10_3390_app13158694
crossref_primary_10_3390_rs15082200
crossref_primary_10_1016_j_ecoinf_2024_102952
crossref_primary_10_1016_j_isprsjprs_2022_12_004
crossref_primary_10_3390_rs15174142
crossref_primary_10_14358_PERS_24_00036R1
crossref_primary_10_1007_s13735_020_00200_3
crossref_primary_10_1109_TGRS_2021_3105575
crossref_primary_10_1016_j_neucom_2024_128527
crossref_primary_10_1109_TGRS_2024_3370826
crossref_primary_10_3390_rs17050868
crossref_primary_10_1016_j_cag_2024_104014
crossref_primary_10_1109_LGRS_2025_3538868
crossref_primary_10_1109_TAI_2021_3081057
crossref_primary_10_1109_TGRS_2021_3088398
crossref_primary_10_1109_TGRS_2025_3536931
crossref_primary_10_3390_rs14091970
crossref_primary_10_1109_LGRS_2021_3067313
crossref_primary_10_1007_s10115_023_01916_4
crossref_primary_10_1109_LGRS_2020_3046137
crossref_primary_10_14358_PERS_23_00004R3
crossref_primary_10_3390_rs15174130
crossref_primary_10_1109_TGRS_2023_3278075
crossref_primary_10_1016_j_rsase_2024_101447
crossref_primary_10_3390_photonics8090394
crossref_primary_10_1007_s12145_024_01265_y
crossref_primary_10_1109_JSTARS_2021_3101934
crossref_primary_10_1109_TGRS_2025_3532349
crossref_primary_10_1109_ACCESS_2021_3053546
crossref_primary_10_1117_1_JEI_33_5_053061
crossref_primary_10_1109_JSTARS_2020_3026724
crossref_primary_10_1016_j_jag_2024_103648
crossref_primary_10_1016_j_jclepro_2023_136060
crossref_primary_10_1109_JIOT_2023_3334742
crossref_primary_10_1109_JSTARS_2024_3355992
crossref_primary_10_1016_j_asoc_2024_112181
crossref_primary_10_1109_LGRS_2024_3368619
crossref_primary_10_3390_rs16234443
crossref_primary_10_1109_TGRS_2021_3103964
crossref_primary_10_1007_s11760_024_03560_z
crossref_primary_10_1109_TGRS_2024_3354999
crossref_primary_10_3390_rs16162992
crossref_primary_10_1109_TGRS_2022_3176603
crossref_primary_10_3390_rs14061507
crossref_primary_10_3390_ijgi13120433
crossref_primary_10_1109_JSTARS_2023_3295732
crossref_primary_10_1016_j_inffus_2024_102508
crossref_primary_10_1016_j_dsp_2024_104674
crossref_primary_10_1111_phor_12446
crossref_primary_10_3390_rs15092460
crossref_primary_10_3390_rs13071327
crossref_primary_10_1016_j_eswa_2024_123233
crossref_primary_10_3390_rs13050862
crossref_primary_10_1109_TGRS_2023_3268232
crossref_primary_10_1109_TGRS_2020_3042607
crossref_primary_10_1109_TNNLS_2023_3309889
crossref_primary_10_3390_rs15245709
crossref_primary_10_1016_j_isprsjprs_2021_05_005
crossref_primary_10_2478_amns_2024_3090
crossref_primary_10_1016_j_neucom_2020_06_011
crossref_primary_10_1080_19475683_2023_2165544
crossref_primary_10_1109_TGRS_2025_3550372
crossref_primary_10_1109_JSTARS_2023_3294624
crossref_primary_10_1016_j_isprsjprs_2021_08_023
crossref_primary_10_1109_TGRS_2024_3402216
crossref_primary_10_1109_JSTARS_2021_3056661
crossref_primary_10_1080_01431161_2022_2089539
crossref_primary_10_1145_3664598
crossref_primary_10_1109_JSTARS_2021_3139017
crossref_primary_10_3390_rs15245710
crossref_primary_10_3390_s23031261
crossref_primary_10_1109_ACCESS_2024_3381539
crossref_primary_10_1109_JSTARS_2022_3198577
crossref_primary_10_1109_LGRS_2022_3226201
crossref_primary_10_1021_acsami_1c15942
crossref_primary_10_1109_LGRS_2023_3337807
crossref_primary_10_1109_TGRS_2022_3200957
crossref_primary_10_3390_rs12223750
crossref_primary_10_1109_TGRS_2024_3354783
crossref_primary_10_1109_TGRS_2024_3519891
crossref_primary_10_1109_TGRS_2024_3499363
crossref_primary_10_1016_j_jag_2024_103675
crossref_primary_10_3390_drones6100292
crossref_primary_10_1080_17538947_2024_2432532
crossref_primary_10_1109_TGRS_2024_3510833
crossref_primary_10_32604_csse_2022_024265
crossref_primary_10_1007_s11042_023_15564_x
crossref_primary_10_1038_s41598_024_75807_1
crossref_primary_10_1109_ACCESS_2025_3538548
crossref_primary_10_1155_2022_3001939
crossref_primary_10_3390_rs14071536
crossref_primary_10_7717_peerj_cs_2053
crossref_primary_10_3390_rs14071534
crossref_primary_10_3390_rs16213989
crossref_primary_10_1007_s11042_024_18866_w
crossref_primary_10_1080_01431161_2021_1976873
crossref_primary_10_1080_10106049_2023_2280549
crossref_primary_10_3390_rs15092429
crossref_primary_10_1109_TIP_2020_2987161
crossref_primary_10_3390_rs16203904
crossref_primary_10_1109_TGRS_2023_3294943
crossref_primary_10_3390_electronics12234886
crossref_primary_10_1109_TGRS_2021_3064599
crossref_primary_10_1109_ACCESS_2021_3119562
crossref_primary_10_1109_TGRS_2022_3140856
crossref_primary_10_1016_j_isprsjprs_2020_12_015
crossref_primary_10_3390_rs16244697
crossref_primary_10_1038_s41597_024_03776_1
crossref_primary_10_3390_rs16244699
crossref_primary_10_1016_j_livsci_2021_104700
crossref_primary_10_3390_rs16244693
crossref_primary_10_1016_j_iswa_2025_200484
crossref_primary_10_1080_17538947_2023_2173318
crossref_primary_10_3390_ijgi11070385
crossref_primary_10_1109_TGRS_2024_3493886
crossref_primary_10_3390_rs14071743
crossref_primary_10_3390_app11093974
crossref_primary_10_3389_fpls_2022_1041514
crossref_primary_10_1016_j_isprsjprs_2024_02_003
crossref_primary_10_1080_07038992_2021_1894915
crossref_primary_10_1016_j_eswa_2023_119960
crossref_primary_10_1109_JSTARS_2023_3247455
crossref_primary_10_1109_TGRS_2021_3101359
crossref_primary_10_1109_TGRS_2022_3233637
crossref_primary_10_14358_PERS_23_00024R2
crossref_primary_10_1109_TGRS_2024_3481415
crossref_primary_10_3390_rs14235969
crossref_primary_10_1109_TGRS_2022_3233881
crossref_primary_10_1109_TGRS_2024_3423305
crossref_primary_10_1109_TGRS_2021_3078507
crossref_primary_10_1016_j_neucom_2024_128119
crossref_primary_10_1109_ACCESS_2025_3527881
crossref_primary_10_1007_s10489_022_03622_0
crossref_primary_10_1016_j_ejrs_2023_11_006
crossref_primary_10_1109_JSTARS_2023_3268176
crossref_primary_10_1109_TIM_2024_3387499
crossref_primary_10_32604_cmc_2022_022989
crossref_primary_10_1109_LGRS_2022_3222061
crossref_primary_10_1016_j_jag_2023_103345
crossref_primary_10_1007_s12145_022_00900_w
crossref_primary_10_3390_rs17010134
crossref_primary_10_1007_s11265_022_01799_8
crossref_primary_10_1109_TGRS_2024_3401573
crossref_primary_10_1109_ACCESS_2022_3199368
crossref_primary_10_3390_rs14184581
crossref_primary_10_1007_s11554_024_01617_3
crossref_primary_10_1016_j_rse_2023_113840
crossref_primary_10_1109_TGRS_2025_3541220
crossref_primary_10_1016_j_compag_2023_108106
crossref_primary_10_1109_JSTARS_2021_3128566
crossref_primary_10_1016_j_optlaseng_2021_106707
crossref_primary_10_1109_TGRS_2025_3541441
crossref_primary_10_3390_rs16091532
crossref_primary_10_1016_j_neunet_2024_106844
crossref_primary_10_1016_j_imavis_2022_104471
crossref_primary_10_1016_j_jes_2024_04_037
crossref_primary_10_3390_ani13142365
crossref_primary_10_1109_TGRS_2023_3238801
crossref_primary_10_1142_S0218001423540241
crossref_primary_10_1109_JSTARS_2023_3234161
crossref_primary_10_3390_rs14153616
crossref_primary_10_1007_s11042_023_15584_7
crossref_primary_10_26634_jip_9_2_18916
crossref_primary_10_1109_MGRS_2023_3312347
Cites_doi 10.1109/TGRS.2013.2296782
10.1109/TGRS.2016.2601622
10.1007/s11263-007-0090-8
10.1109/TIP.2018.2835143
10.1109/TPAMI.2015.2389824
10.1109/LGRS.2015.2432135
10.1145/2647868.2654889
10.1109/TPAMI.2012.231
10.1109/TGRS.2017.2783902
10.1007/s11045-015-0370-3
10.1117/1.JRS.11.042611
10.3390/s17122720
10.1016/j.isprsjprs.2014.10.002
10.1109/LGRS.2016.2565705
10.1080/01431161.2012.705443
10.1127/1432-8364/2010/0041
10.1109/LGRS.2018.2813094
10.1080/01431161.2014.999881
10.3390/rs9111170
10.3390/s17020336
10.1007/s11263-013-0620-5
10.1109/TPAMI.2011.94
10.3390/rs9060618
10.1109/TPAMI.2009.167
10.1016/j.isprsjprs.2018.02.014
10.3390/rs10010131
10.1109/LGRS.2016.2542358
10.1109/TGRS.2014.2299540
10.1109/LGRS.2015.2439517
10.1007/s11263-014-0733-5
10.1109/TGRS.2016.2572736
10.1109/TIP.2018.2867198
10.1016/j.isprsjprs.2016.03.014
10.1109/JSTARS.2015.2467377
10.1109/TGRS.2016.2645610
10.1109/LGRS.2017.2708722
10.3390/rs9070666
10.1109/TPAMI.2017.2699184
10.1109/TGRS.2011.2136381
10.1080/2150704X.2017.1415474
10.1109/MSP.2017.2749125
10.1007/978-3-030-01252-6_24
10.1109/JSTARS.2015.2404578
10.1109/TGRS.2017.2778300
10.1007/s11263-019-01247-4
10.1109/TPAMI.2016.2587642
10.1109/TGRS.2014.2374218
10.1109/LGRS.2017.2727515
10.3390/rs9121312
10.1109/JSTARS.2017.2694890
10.1016/j.jvcir.2015.11.002
10.1016/j.isprsjprs.2013.08.001
10.1109/MGRS.2017.2762307
10.3390/rs10050783
10.1109/TPAMI.2016.2577031
10.1007/s11263-009-0275-4
10.1109/TGRS.2016.2569141
10.1016/j.isprsjprs.2013.12.011
10.1109/MSP.2012.2205597
ContentType Journal Article
Copyright 2019 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS)
Copyright_xml – notice: 2019 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS)
DOI 10.1016/j.isprsjprs.2019.11.023
DatabaseName CrossRef
AGRICOLA
AGRICOLA - Academic
DatabaseTitle CrossRef
AGRICOLA
AGRICOLA - Academic
DatabaseTitleList AGRICOLA

DeliveryMethod fulltext_linktorsrc
Discipline Geography
Engineering
EISSN 1872-8235
EndPage 307
ExternalDocumentID 10_1016_j_isprsjprs_2019_11_023
S0924271619302825
ISSN 0924-2716
IsPeerReviewed true
IsScholarly true
Keywords Deep learning
Benchmark dataset
Optical remote sensing images
Object detection
Convolutional Neural Network (CNN)
Language English
LinkModel DirectLink
ORCID 0000-0001-5030-0683
0000-0001-5545-7217
0000-0002-7873-1554
PQID 2400518149
PQPubID 24069
PageCount 12
ParticipantIDs proquest_miscellaneous_2400518149
crossref_citationtrail_10_1016_j_isprsjprs_2019_11_023
crossref_primary_10_1016_j_isprsjprs_2019_11_023
elsevier_sciencedirect_doi_10_1016_j_isprsjprs_2019_11_023
PublicationCentury 2000
PublicationDate January 2020
2020-01-00
20200101
PublicationDateYYYYMMDD 2020-01-01
PublicationDate_xml – month: 01
  year: 2020
  text: January 2020
PublicationDecade 2020
PublicationTitle ISPRS journal of photogrammetry and remote sensing
PublicationYear 2020
Publisher Elsevier B.V
Publisher_xml – name: Elsevier B.V
References Ding, Li, Xia, Wei, Zhang, Zhang (b0125) 2017; 9
Girshick (b0160) 2015
Szegedy, Liu, Jia, Sermanet, Reed, Anguelov, Erhan, Vanhoucke, Rabinovich (b0460) 2015
Mordan, Thome, Henaff, Cord (b0355) 2018
Salberg (b0410) 2015
Dai, Li, He, Sun (b0100) 2016; Syst
Jia, Shelhamer, Donahue, Karayev, Long, Girshick, Guadarrama, Darrell (b0245) 2014
Tian, Chen, Shah (b0485) 2017
Krizhevsky, Sutskever, Hinton (b0255) 2012
Zhong, Han, Zhang (b0570) 2018; 138
Tang, Zhou, Deng, Zou, Lei (b0475) 2017; 17
Abadi, Barham, Chen, Chen, Davis, Dean, Devin, Ghemawat, Irving, Isard (b0005) 2016
Han, Zhou, Zhang, Cheng, Guo, Liu, Bu, Wu (b0185) 2014; 89
Lin, Dollár, Girshick, He, Hariharan, Belongie (b0280) 2017
Liu, Qi, Qin, Shi, Jia (b0315) 2018
Hou, Chen, Shah (b0225) 2017
Xiao, Liu, Tang, Zhai (b0510) 2015; 36
Lin, Goyal, Girshick, He, Dollar (b0290) 2017
Luan, Chen, Zhang, Han, Liu (b0345) 2018; 27
Heitz, Koller (b0215) 2008; Vis
Li, Cheng, Bu, You (b0265) 2018; 56
Chen, Papandreou, Kokkinos, Murphy, Yuille (b0040) 2018; 40
Cheng, Han, Zhou, Guo (b0060) 2014; 98
Girshick, Donahue, Darrell, Malik (b0165) 2014
He, Zhang, Ren, Sun (b0205) 2014; 37
Mikolov, Deoras, Povey, Burget, Cernocky (b0350) 2012
Xu, Xu, Wang, Yang, Pu (b0515) 2017; 9
Cramer (b0095) 2010; 2010
He, Gkioxari, Dollar, Girshick (b0200) 2017
Yang, X., Fu, K., Sun, H., Yang, J., Guo, Z., Yan, M., Zhan, T., Xian, S., 2018b. R2CNN++: Multi-Dimensional Attention Based Rotation Invariant Detector with Robust Anchor Strategy. arXiv preprint arXiv:1811.07126.
Everingham, Eslami, Van Gool, Williams, Winn, Zisserman (b0130) 2015; 111
Liu, Anguelov, Erhan, Szegedy, Reed, Fu, Berg (b0320) 2016
Cheng, Han, Guo, Qian, Zhou, Yao, Hu (b0055) 2013; 85
Han, Zhang, Cheng, Guo, Ren (b0175) 2015; 53
Han, Zhong, Feng, Zhang (b0190) 2017
Felzenszwalb, Girshick, Mcallester, Ramanan (b0145) 2010; 32
Cheng, L., Liu, X., Li, L., Jiao, L., Tang, X., 2018b. Deep Adaptive Proposal Network for Object Detection in Optical Remote Sensing Images. arXiv preprint arXiv:1807.07327.
Xia, Bai, Ding, Zhu, Belongie, Luo, Datcu, Pelillo, Zhang (b0505) 2018
Bai, Zhang, Zhou (b0020) 2014; 52
Paszke, Gross, Chintala, Chanan, Yang, DeVito, Lin, Desmaison, Antiga, Lerer (b0375) 2017
Zeiler, Fergus (b0550) 2014; Vis
Tompson, Jain, LeCun, Bregler (b0490) 2014
Simonyan, Zisserman (b0435) 2015
Szegedy, Vanhoucke, Ioffe, Shlens, Wojna (b0465) 2016
Cheng, Zhou, Han (b0080) 2016
Singh, Davis (b0440) 2018
Razakarivony, Jurie (b0380) 2015; 34
Shrivastava, Gupta, Girshick (b0430) 2016
Sermanet, Eigen, Zhang, Mathieu, Fergus, Lecun (b0415) 2014
Liu, Wang, Weng, Yang (b0330) 2016; 13
Ševo, Avramović (b0420) 2017; 13
Yu, Guan, Ji (b0545) 2015; 12
Liu, Mattyus (b0295) 2015; 12
Tang, Zhou, Deng, Lei, Zou (b0470) 2017; 9
Wei, Zhang, Zhang, Tian, Zhang (b0500) 2018; 10
Zhang, Shi, Wu (b0560) 2017; 8
Cheng, Han (b0050) 2016; 117
Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár, Zitnick (b0285) 2014
Zhou, Cheng, Liu, Bu, Hu (b0575) 2016; 27
Zitnick, Dollár (b0595) 2014
Liu, Ma, Chen (b0325) 2018; 15
Yang, Zhuang, Bi, Shi, Xie (b0530) 2017; 14
Clément, Camille, Laurent, Yann (b0090) 2013; 35
Redmon, Divvala, Girshick, Farhadi (b0385) 2016
Bell, Lawrence Zitnick, Bala, Girshick (b0025) 2016
Gidaris, Komodakis (b0155) 2015
Yokoya, Iwasaki (b0540) 2015; 8
Newell, Yang, Deng (b0365) 2016; Vis
Ioffe, Szegedy (b0240) 2015
Fu, C.-Y., Liu, W., Ranga, A., Tyagi, A., Berg, A.C., 2017. DSSD: Deconvolutional single shot detector. arXiv preprint arXiv:1701.06659.
Shrivastava, Gupta (b0425) 2016
Liu, L., Ouyang, W., Wang, X., Fieguth, P., Chen, J., Liu, X., Pietikäinen, M., 2018a. Deep learning for generic object detection: A survey. arXiv preprint arXiv:1809.02165.
Agarwal, S., Terrail, J.O.D., Jurie, F., 2018. Recent Advances in Object Detection in the Age of Deep Convolutional Neural Networks. arXiv preprint arXiv:1809.03193.
Zou, Shi (b0600) 2016; 54
Singh, Li, Sharma, Davis (b0445) 2018
Aksoy (b0015) 2014; 52
Cai, Fan, Feris, Vasconcelos (b0035) 2016
Li, Z., Peng, C., Yu, G., Zhang, X., Deng, Y., Sun, J., 2017. Light-head r-cnn: In defense of two-stage object detector. arXiv preprint arXiv:1711.07264.
Cheng, Guo, Zhao, Han, Li, Fang (b0045) 2013; 34
Liu, L., Pan, Z., Lei, B., 2017a. Learning a Rotation Invariant Detector with Rotatable Bounding Box. arXiv preprint arXiv:1711.09405.
Redmon, Farhadi (b0390) 2017
Han, Zhong, Zhang (b0195) 2017; 9
Cheng, Han, Zhou, Xu (b0065) 2019; 28
Long, Shelhamer, Darrell (b0335) 2015
Kong, Yao, Chen, Sun (b0250) 2016
Redmon, J., Farhadi, A., 2018. YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.
Hu, Shen, Sun (b0230) 2018
Lin, Shi, Zou (b0275) 2017; 14
Szegedy, Ioffe, Vanhoucke, Alemi (b0455) 2017
Yang, Zhu, Jiang, Gao, Xiao, Zheng (b0520) 2018; 9
Liu, S., Huang, D., Wang, Y., 2017b. Receptive Field Block Net for Accurate and Fast Object Detection. arXiv preprint arXiv:1711.07767.
Das, Mirnalinee, Varghese (b0110) 2011; 49
Han, Zhang, Cheng, Liu, Xu (b0180) 2018; 35
Cheng, Zhou, Han (b0075) 2016; 54
Everingham, Van Gool, Williams, Winn, Zisserman (b0135) 2010; 88
Ren, He, Girshick, Sun (b0400) 2017; 39
Ouyang, Zeng, Wang, Yan, Loy, Tang, Wang, Qiu, Luo, Tian (b0370) 2017; 39
Yao, Jiang, Zhang, Zhao, Cai (b0535) 2017; 11
Guo, Yang, Zhang, Hua (b0170) 2018; 10
Deng, Sun, Zhou, Zhao, Zou (b0120) 2017; 10
Zhu, Chen, Dai, Fu, Ye, Jiao (b0580) 2015
He, Zhang, Ren, Sun (b0210) 2016
Farooq, Hu, Jia (b0140) 2017
Uijlings, Van De Sande, Gevers, Smeulders (b0495) 2013; 104
Hinton, Deng, Yu, Dahl, Mohamed, Jaitly, Senior, Vanhoucke, Nguyen, Sainath (b0220) 2012; 29
Zhong, Lei, Yao (b0565) 2017; 17
Benedek, Descombes, Zerubia (b0030) 2011; 34
Zhang, Du, Zhang, Xu (b0555) 2016; 54
Russell, Torralba, Murphy, Freeman (b0405) 2008; 77
Tanner, Colder, Pullen, Heagy, Eppolito, Carlan, Oertel, Sallee (b0480) 2009
Law, Deng (b0260) 2018
Huang, Liu, Laurens, Weinberger (b0235) 2017
Cheng, Yang, Yao, Guo, Han (b0070) 2018; 56
Dai, Qi, Xiong, Li, Zhang, Hu, Wei (b0105) 2017; Vision
Mundhenk, Konjevod, Sakla, Boakye (b0360) 2016; Vis
Singh, Najibi, Davis (b0450) 2018
Zhu, Urtasun, Salakhutdinov, Fidler (b0590) 2015
Long, Gong, Xiao, Liu (b0340) 2017; 55
Deng, Dong, Socher, Li, Li, Fei-Fei (b0115) 2009
Zhu, Tuia, Mou, Xia, Zhang, Xu, Fraundorfer (b0585) 2017; 5
Guo (10.1016/j.isprsjprs.2019.11.023_b0170) 2018; 10
Jia (10.1016/j.isprsjprs.2019.11.023_b0245) 2014
Cheng (10.1016/j.isprsjprs.2019.11.023_b0075) 2016; 54
Salberg (10.1016/j.isprsjprs.2019.11.023_b0410) 2015
Chen (10.1016/j.isprsjprs.2019.11.023_b0040) 2018; 40
10.1016/j.isprsjprs.2019.11.023_b0085
Long (10.1016/j.isprsjprs.2019.11.023_b0340) 2017; 55
Newell (10.1016/j.isprsjprs.2019.11.023_b0365) 2016; Vis
Han (10.1016/j.isprsjprs.2019.11.023_b0180) 2018; 35
Yang (10.1016/j.isprsjprs.2019.11.023_b0530) 2017; 14
Everingham (10.1016/j.isprsjprs.2019.11.023_b0135) 2010; 88
Hu (10.1016/j.isprsjprs.2019.11.023_b0230) 2018
Law (10.1016/j.isprsjprs.2019.11.023_b0260) 2018
Russell (10.1016/j.isprsjprs.2019.11.023_b0405) 2008; 77
Zhong (10.1016/j.isprsjprs.2019.11.023_b0565) 2017; 17
Hinton (10.1016/j.isprsjprs.2019.11.023_b0220) 2012; 29
Liu (10.1016/j.isprsjprs.2019.11.023_b0320) 2016
Gidaris (10.1016/j.isprsjprs.2019.11.023_b0155) 2015
10.1016/j.isprsjprs.2019.11.023_b0525
Liu (10.1016/j.isprsjprs.2019.11.023_b0330) 2016; 13
Liu (10.1016/j.isprsjprs.2019.11.023_b0315) 2018
References_xml – volume: 37
  start-page: 1904
  year: 2014
  end-page: 1916
  ident: b0205
  article-title: Spatial pyramid pooling in deep convolutional networks for visual recognition
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– volume: 9
  start-page: 666
  year: 2017
  ident: b0195
  article-title: An efficient and robust integrated geospatial object detection framework for high spatial resolution remote sensing imagery
  publication-title: Remote Sens.
– volume: 88
  start-page: 303
  year: 2010
  end-page: 338
  ident: b0135
  article-title: The pascal visual object classes (voc) challenge
  publication-title: Int. J. Comput. Vis.
– start-page: 6517
  year: 2017
  end-page: 6525
  ident: b0390
  article-title: YOLO9000: better, faster, stronger
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 53
  start-page: 3325
  year: 2015
  end-page: 3337
  ident: b0175
  article-title: Object detection in optical remote sensing images based on weakly supervised learning and high-level feature learning
  publication-title: IEEE Trans. Geosci. Remote Sens.
– volume: Vis
  start-page: 30
  year: 2008
  end-page: 43
  ident: b0215
  article-title: Learning spatial context: using stuff to find things
  publication-title: Proc. Eur. Conf. Comput.
– start-page: 3337
  year: 2017
  end-page: 3340
  ident: b0140
  article-title: Efficient object proposals extraction for target detection in VHR remote sensing images
  publication-title: Proc. IEEE Int. Geosci. Remote Sens. Sympos.
– start-page: 3353
  year: 2017
  end-page: 3356
  ident: b0190
  article-title: Robust geospatial object detection based on pre-trained faster R-CNN framework for high spatial resolution imagery
  publication-title: Proc. IEEE Int. Geosci. Remote Sens. Sympos.
– start-page: 3431
  year: 2015
  end-page: 3440
  ident: b0335
  article-title: Fully convolutional networks for semantic segmentation
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 1
  year: 2018
  end-page: 21
  ident: b0355
  article-title: End-to-end learning of latent deformable part-based representations for object detection
  publication-title: Int. J. Comput. Vis.
– start-page: 1134
  year: 2015
  end-page: 1142
  ident: b0155
  article-title: Object detection via a multi-region and semantic segmentation-aware CNN model
  publication-title: Proc. IEEE Int. Conf. Comput. Vision
– start-page: 2117
  year: 2017
  end-page: 2125
  ident: b0280
  article-title: Feature pyramid networks for object detection
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 9
  start-page: 1312
  year: 2017
  ident: b0515
  article-title: Deformable ConvNet with aspect ratio constrained NMS for object detection in remote sensing imagery
  publication-title: Remote Sens.
– volume: 13
  start-page: 1074
  year: 2016
  end-page: 1078
  ident: b0330
  article-title: Ship rotated bounding box space for ship extraction from high-resolution optical satellite images with complex backgrounds
  publication-title: IEEE Geosci. Remote Sens. Lett.
– volume: 34
  start-page: 45
  year: 2013
  end-page: 59
  ident: b0045
  article-title: Automatic landslide detection from remote-sensing imagery using a scene classification method based on BoVW and pLSA
  publication-title: Int. J. Remote Sens.
– volume: 85
  start-page: 32
  year: 2013
  end-page: 43
  ident: b0055
  article-title: Object detection in remote sensing imagery using a discriminatively trained mixture model
  publication-title: ISPRS J. Photogramm. Remote Sens.
– volume: 35
  start-page: 84
  year: 2018
  end-page: 100
  ident: b0180
  article-title: Advanced deep-learning techniques for salient and category-specific object detection: a survey
  publication-title: IEEE Signal Process. Magaz.
– start-page: 391
  year: 2014
  end-page: 405
  ident: b0595
  article-title: Edge boxes: locating object proposals from edges
  publication-title: Proc. Eur. Conf. Comput. Vis.
– start-page: 2874
  year: 2016
  end-page: 2883
  ident: b0025
  article-title: Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 40
  start-page: 834
  year: 2018
  end-page: 848
  ident: b0040
  article-title: DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– volume: 39
  start-page: 1320
  year: 2017
  end-page: 1334
  ident: b0370
  article-title: DeepID-Net: object detection with deformable part based convolutional neural networks
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– volume: 14
  start-page: 1665
  year: 2017
  end-page: 1669
  ident: b0275
  article-title: Fully convolutional network with task partitioning for inshore ship detection in optical remote sensing images
  publication-title: IEEE Geosci. Remote Sens. Lett.
– start-page: 8759
  year: 2018
  end-page: 8768
  ident: b0315
  article-title: Path aggregation network for instance segmentation
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 34
  start-page: 33
  year: 2011
  end-page: 50
  ident: b0030
  article-title: Building development monitoring in multitemporal remotely sensed image pairs with stochastic birth-death dynamics
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– start-page: 330
  year: 2016
  end-page: 348
  ident: b0425
  article-title: Contextual priming and feedback for faster r-cnn
  publication-title: Proc. Eur. Conf. Comput. Vis
– start-page: 734
  year: 2018
  end-page: 750
  ident: b0260
  article-title: Cornernet: Detecting objects as paired keypoints
  publication-title: Proc. Eur. Conf. Comput. Vis.
– reference: Yang, X., Fu, K., Sun, H., Yang, J., Guo, Z., Yan, M., Zhan, T., Xian, S., 2018b. R2CNN++: Multi-Dimensional Attention Based Rotation Invariant Detector with Robust Anchor Strategy. arXiv preprint arXiv:1811.07126.
– start-page: 3578
  year: 2018
  end-page: 3587
  ident: b0440
  article-title: An analysis of scale invariance in object detection - SNIP
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 354
  year: 2016
  end-page: 370
  ident: b0035
  article-title: A unified multi-scale deep convolutional neural network for fast object detection
  publication-title: Proc. Eur. Conf. Comput. Vis.
– start-page: 580
  year: 2014
  end-page: 587
  ident: b0165
  article-title: Rich feature hierarchies for accurate object detection and semantic segmentation
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 111
  start-page: 98
  year: 2015
  end-page: 136
  ident: b0130
  article-title: The pascal visual object classes challenge: A retrospective
  publication-title: Int. J. Comput. Vis.
– volume: 32
  start-page: 1627
  year: 2010
  end-page: 1645
  ident: b0145
  article-title: Object detection with discriminatively trained part-based models
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– start-page: 1
  year: 2017
  end-page: 4
  ident: b0375
  article-title: Automatic differentiation in pytorch
  publication-title: Proc. Conf. Adv. Neural Inform. Process. Syst. Workshop
– reference: Fu, C.-Y., Liu, W., Ranga, A., Tyagi, A., Berg, A.C., 2017. DSSD: Deconvolutional single shot detector. arXiv preprint arXiv:1701.06659.
– start-page: 4703
  year: 2015
  end-page: 4711
  ident: b0590
  article-title: segdeepm: Exploiting segmentation and context in deep neural networks for object detection
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 2818
  year: 2016
  end-page: 2826
  ident: b0465
  article-title: Rethinking the inception architecture for computer vision
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 36
  start-page: 618
  year: 2015
  end-page: 644
  ident: b0510
  article-title: Elliptic Fourier transformation-based histograms of oriented gradients for rotationally invariant object detection in remote-sensing images
  publication-title: Int. J. Remote Sens.
– volume: Vis
  start-page: 483
  year: 2016
  end-page: 499
  ident: b0365
  article-title: Stacked hourglass networks for human pose estimation
  publication-title: Proc. Eur. Conf. Comput.
– volume: 52
  start-page: 6627
  year: 2014
  end-page: 6638
  ident: b0015
  article-title: Detection of compound structures using a Gaussian mixture model with spectral and spatial constraints
  publication-title: IEEE Trans. Geosci. Remote Sens.
– volume: 13
  start-page: 740
  year: 2017
  end-page: 744
  ident: b0420
  article-title: Convolutional neural network based automatic object detection on aerial images
  publication-title: IEEE Geosci. Remote Sens. Lett.
– start-page: 21
  year: 2016
  end-page: 37
  ident: b0320
  article-title: SSD: Single Shot MultiBox Detector
  publication-title: Proc. Eur. Conf. Comput. Vis.
– volume: 54
  start-page: 5553
  year: 2016
  end-page: 5563
  ident: b0555
  article-title: Weakly supervised learning based on coupled convolutional neural networks for aircraft detection
  publication-title: IEEE Trans. Geosci. Remote Sens.
– volume: 17
  start-page: 336
  year: 2017
  ident: b0475
  article-title: Vehicle detection in aerial images based on region convolutional neural networks and hard negative example mining
  publication-title: Sensors
– start-page: 12
  year: 2017
  ident: b0455
  article-title: Inception-v4, inception-resnet and the impact of residual connections on learning
  publication-title: AAAI
– reference: Redmon, J., Farhadi, A., 2018. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767.
– start-page: 265
  year: 2016
  end-page: 283
  ident: b0005
  article-title: TensorFlow: a system for large-scale machine learning
  publication-title: Proc. Conf. Oper. Syst. Des. Implement
– volume: 56
  start-page: 2811
  year: 2018
  end-page: 2821
  ident: b0070
  article-title: When deep learning meets metric learning: remote sensing image scene classification via learning discriminative CNNs
  publication-title: IEEE Trans. Geosci. Remote Sens.
– start-page: 1
  year: 2015
  end-page: 13
  ident: b0435
  article-title: Very deep convolutional networks for large-scale image recognition
  publication-title: Proc. Int. Conf. Learn. Represent
– start-page: 779
  year: 2016
  end-page: 788
  ident: b0385
  article-title: You only look once: Unified, real-time object detection
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– reference: Liu, S., Huang, D., Wang, Y., 2017b. Receptive Field Block Net for Accurate and Fast Object Detection. arXiv preprint arXiv:1711.07767.
– start-page: 1440
  year: 2015
  end-page: 1448
  ident: b0160
  article-title: Fast r-cnn
  publication-title: Proc. IEEE Int. Conf. Comput. Vision
– start-page: 761
  year: 2016
  end-page: 769
  ident: b0430
  article-title: Training region-based object detectors with online hard example mining
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit
– start-page: 1998
  year: 2017
  end-page: 2006
  ident: b0485
  article-title: Cross-view image matching for geo-localization in urban environments
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 9
  start-page: 229
  year: 2018
  end-page: 237
  ident: b0520
  article-title: Aircraft detection in remote sensing images based on a deep residual network and Super-Vector coding
  publication-title: Remote Sens. Lett.
– reference: Agarwal, S., Terrail, J.O.D., Jurie, F., 2018. Recent Advances in Object Detection in the Age of Deep Convolutional Neural Networks. arXiv preprint arXiv:1809.03193.
– volume: Vis
  start-page: 818
  year: 2014
  end-page: 833
  ident: b0550
  article-title: Visualizing and understanding convolutional networks
  publication-title: Proc. Eur. Conf. Comput.
– volume: 98
  start-page: 119
  year: 2014
  end-page: 132
  ident: b0060
  article-title: Multi-class geospatial object detection and geographic image classification based on collection of part detectors
  publication-title: ISPRS J. Photogramm. Remote Sens.
– volume: Vis
  start-page: 785
  year: 2016
  end-page: 800
  ident: b0360
  article-title: A large contextual dataset for classification, detection and counting of cars with deep learning
  publication-title: Proc. Eur. Conf. Comput.
– volume: 10
  start-page: 783
  year: 2018
  ident: b0500
  article-title: Deep cube-pair network for hyperspectral imagery classification
  publication-title: Remote Sens.
– volume: 10
  start-page: 3652
  year: 2017
  end-page: 3664
  ident: b0120
  article-title: Toward fast and accurate vehicle detection in aerial images using coupled region-based convolutional neural networks
  publication-title: IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens.
– start-page: 1
  year: 2009
  end-page: 8
  ident: b0480
  article-title: Overhead imagery research data set — an annotated data library & tools to aid in the development of computer vision algorithms
  publication-title: Proc. IEEE Appl. Imag. Pattern Recognit. Workshop
– volume: 104
  start-page: 154
  year: 2013
  end-page: 171
  ident: b0495
  article-title: Selective search for object recognition
  publication-title: Int. J. Comput. Vis.
– year: 2017
  ident: b0200
  article-title: Mask R.-C.N.N.
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– volume: 9
  start-page: 1170
  year: 2017
  ident: b0470
  article-title: Arbitrary-oriented vehicle detection in aerial imagery with single convolutional neural networks
  publication-title: Remote Sens.
– reference: Liu, L., Pan, Z., Lei, B., 2017a. Learning a Rotation Invariant Detector with Rotatable Bounding Box. arXiv preprint arXiv:1711.09405.
– volume: 10
  start-page: 131
  year: 2018
  ident: b0170
  article-title: Geospatial object detection in high resolution satellite images based on multi-scale convolutional neural network
  publication-title: Remote Sens.
– volume: 39
  start-page: 1137
  year: 2017
  end-page: 1149
  ident: b0400
  article-title: Faster R-CNN: towards real-time object detection with region proposal networks
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– start-page: 770
  year: 2016
  end-page: 778
  ident: b0210
  article-title: Deep residual learning for image recognition
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 4700
  year: 2017
  end-page: 4708
  ident: b0235
  article-title: Densely connected convolutional networks
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 1081
  year: 2018
  end-page: 1090
  ident: b0445
  article-title: R-FCN-3000 at 30fps: decoupling detection and classification
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 5
  start-page: 8
  year: 2017
  end-page: 36
  ident: b0585
  article-title: Deep learning in remote sensing: a comprehensive review and list of resources
  publication-title: IEEE Geosci. Remote Sens. Magaz.
– start-page: 1
  year: 2015
  end-page: 9
  ident: b0460
  article-title: Going deeper with convolutions
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 34
  start-page: 187
  year: 2015
  end-page: 203
  ident: b0380
  article-title: Vehicle detection in aerial imagery: A small target detection benchmark
  publication-title: J. Vis. Commun. Image Represent.
– volume: 35
  start-page: 1915
  year: 2013
  end-page: 1929
  ident: b0090
  article-title: Learning hierarchical features for scene labeling
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– volume: 27
  start-page: 4357
  year: 2018
  end-page: 4366
  ident: b0345
  article-title: Gabor convolutional networks
  publication-title: IEEE Trans. Image Process.
– start-page: 1799
  year: 2014
  end-page: 1807
  ident: b0490
  article-title: Joint training of a convolutional network and a graphical model for human pose estimation
  publication-title: Proc. Conf. Adv. Neural Inform. Process. Syst.
– volume: 11
  start-page: 1
  year: 2017
  ident: b0535
  article-title: Ship detection in optical remote sensing images based on deep convolutional neural networks
  publication-title: J. Appl. Remote Sens.
– volume: 9
  start-page: 618
  year: 2017
  ident: b0125
  article-title: Convolutional neural networks based hyperspectral image classification method with adaptive kernels
  publication-title: Remote Sens.
– start-page: 1893
  year: 2015
  end-page: 1896
  ident: b0410
  article-title: Detection of seals in remote sensing images using features extracted from deep convolutional neural networks
  publication-title: Proc. IEEE Int. Geosci. Remote Sens. Sympos.
– start-page: 675
  year: 2014
  end-page: 678
  ident: b0245
  article-title: Caffe: Convolutional architecture for fast feature embedding
  publication-title: Proc. ACM Int. Conf. Multimedia
– volume: 77
  start-page: 157
  year: 2008
  end-page: 173
  ident: b0405
  article-title: LabelMe: A database and web-based tool for image annotation
  publication-title: Int. J. Comput. Vis.
– start-page: 9310
  year: 2018
  end-page: 9320
  ident: b0450
  article-title: SNIPER: Efficient multi-scale training
  publication-title: Proc. Conf. Adv. Neural Inform. Process. Syst.
– volume: Syst
  start-page: 379
  year: 2016
  end-page: 387
  ident: b0100
  article-title: R-FCN: Object detection via region-based fully convolutional networks
  publication-title: Proc. Conf. Adv. Neural Inform. Process.
– start-page: 2999
  year: 2017
  end-page: 3007
  ident: b0290
  article-title: Focal loss for dense object detection
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– volume: 55
  start-page: 2486
  year: 2017
  end-page: 2498
  ident: b0340
  article-title: Accurate object localization in remote sensing images based on convolutional neural networks
  publication-title: IEEE Trans. Geosci. Remote Sens.
– reference: Cheng, L., Liu, X., Li, L., Jiao, L., Tang, X., 2018b. Deep Adaptive Proposal Network for Object Detection in Optical Remote Sensing Images. arXiv preprint arXiv:1807.07327.
– volume: 56
  start-page: 2337
  year: 2018
  end-page: 2348
  ident: b0265
  article-title: Rotation-insensitive and context-augmented object detection in remote sensing images
  publication-title: IEEE Trans. Geosci. Remote Sens.
– volume: 8
  start-page: 4895
  year: 2017
  end-page: 4909
  ident: b0560
  article-title: A hierarchical oil tank detector with deep surrounding features for high-resolution optical satellite imagery
  publication-title: IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens.
– start-page: 196
  year: 2012
  end-page: 201
  ident: b0350
  article-title: Strategies for training large scale neural network language models
  publication-title: Proc. IEEE Workshop Autom. Speech Recognit. Underst.
– volume: 17
  start-page: 2720
  year: 2017
  ident: b0565
  article-title: Robust vehicle detection in aerial images based on cascaded convolutional neural networks
  publication-title: Sensors
– start-page: 248
  year: 2009
  end-page: 255
  ident: b0115
  article-title: Imagenet: A large-scale hierarchical image database
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit
– volume: Vision
  start-page: 764
  year: 2017
  end-page: 773
  ident: b0105
  article-title: Deformable convolutional networks
  publication-title: Proc. IEEE Int. Conf. Comput.
– volume: 2010
  start-page: 73
  year: 2010
  end-page: 82
  ident: b0095
  article-title: The DGPF-test on digital airborne camera evaluation-overview and test design
  publication-title: Photogrammetrie - Fernerkundung - Geoinformation
– start-page: 845
  year: 2016
  end-page: 853
  ident: b0250
  article-title: Hypernet: Towards accurate region proposal generation and joint object detection
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 1097
  year: 2012
  end-page: 1105
  ident: b0255
  article-title: ImageNet classification with deep convolutional neural networks
  publication-title: Proc. Conf. Adv. Neural Inform. Process. Syst.
– volume: 117
  start-page: 11
  year: 2016
  end-page: 28
  ident: b0050
  article-title: A survey on object detection in optical remote sensing images
  publication-title: ISPRS J. Photogramm. Remote Sens.
– volume: 54
  start-page: 7405
  year: 2016
  end-page: 7415
  ident: b0075
  article-title: Learning rotation-invariant convolutional neural networks for object detection in VHR optical remote sensing images
  publication-title: IEEE Trans. Geosci. Remote Sens.
– volume: 138
  start-page: 281
  year: 2018
  end-page: 294
  ident: b0570
  article-title: Multi-class geospatial object detection based on a position-sensitive balancing framework for high spatial resolution remote sensing imagery
  publication-title: ISPRS J. Photogramm. Remote Sens.
– volume: 15
  start-page: 937
  year: 2018
  end-page: 941
  ident: b0325
  article-title: Arbitrary-oriented ship detection framework in optical remote-sensing images
  publication-title: IEEE Geosci. Remote Sens. Lett.
– start-page: 448
  year: 2015
  end-page: 456
  ident: b0240
  article-title: Batch normalization: accelerating deep network training by reducing internal covariate shift
  publication-title: Proc. IEEE Int. Conf. Machine Learning
– start-page: 1
  year: 2014
  end-page: 16
  ident: b0415
  article-title: OverFeat: integrated recognition, localization and detection using convolutional networks
  publication-title: Proc. Int. Conf. Learn. Represent
– volume: 52
  start-page: 6508
  year: 2014
  end-page: 6520
  ident: b0020
  article-title: VHR object detection based on structural feature extraction and query expansion
  publication-title: IEEE Trans. Geosci. Remote Sens.
– start-page: 7132
  year: 2018
  end-page: 7141
  ident: b0230
  article-title: Squeeze-and-excitation networks
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 54
  start-page: 5832
  year: 2016
  end-page: 5845
  ident: b0600
  article-title: Ship detection in spaceborne optical image with SVD networks
  publication-title: IEEE Trans. Geosci. Remote Sens.
– volume: 14
  start-page: 1293
  year: 2017
  end-page: 1297
  ident: b0530
  article-title: M-FCN: effective fully convolutional network-based airplane detection framework
  publication-title: IEEE Geosci. Remote Sens. Lett.
– volume: 89
  start-page: 37
  year: 2014
  end-page: 48
  ident: b0185
  article-title: Efficient, simultaneous detection of multi-class geospatial targets based on visual saliency modeling and discriminative learning of sparse coding
  publication-title: ISPRS J. Photogramm. Remote Sens.
– start-page: 2884
  year: 2016
  end-page: 2893
  ident: b0080
  article-title: RIFD-CNN: rotation-invariant and fisher discriminative convolutional neural networks for object detection
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 12
  start-page: 1938
  year: 2015
  end-page: 1942
  ident: b0295
  article-title: Fast multiclass vehicle detection on aerial images
  publication-title: IEEE Geosci. Remote Sens. Lett.
– volume: 29
  start-page: 82
  year: 2012
  end-page: 97
  ident: b0220
  article-title: Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups
  publication-title: IEEE Signal Process. Magaz.
– volume: 27
  start-page: 925
  year: 2016
  end-page: 944
  ident: b0575
  article-title: Weakly supervised target detection in remote sensing images based on transferred deep features and negative bootstrapping
  publication-title: Multidimens. Syst. Signal Process.
– reference: Li, Z., Peng, C., Yu, G., Zhang, X., Deng, Y., Sun, J., 2017. Light-head r-cnn: In defense of two-stage object detector. arXiv preprint arXiv:1711.07264.
– start-page: 3974
  year: 2018
  end-page: 3983
  ident: b0505
  article-title: DOTA: A large-scale dataset for object detection in aerial images
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 5822
  year: 2017
  end-page: 5831
  ident: b0225
  article-title: Tube convolutional neural network (T-CNN) for action detection in videos
  publication-title: Proc. IEEE Int. Conf. Comput. Vision
– volume: 49
  start-page: 3906
  year: 2011
  end-page: 3931
  ident: b0110
  article-title: Use of salient features for the design of a multistage framework to extract roads from high-resolution multispectral satellite images
  publication-title: IEEE Trans. Geosci. Remote Sens.
– start-page: 740
  year: 2014
  end-page: 755
  ident: b0285
  article-title: Microsoft coco: Common objects in context
  publication-title: Proc. Eur. Conf. Comput. Vis.
– volume: 28
  start-page: 265
  year: 2019
  end-page: 278
  ident: b0065
  article-title: Learning rotation-invariant and fisher discriminative convolutional neural networks for object detection
  publication-title: IEEE Trans. Image Process.
– start-page: 3735
  year: 2015
  end-page: 3739
  ident: b0580
  article-title: Orientation robust object detection in aerial images using deep convolutional neural network
  publication-title: Proc. IEEE Int. Conf. Image Processing
– reference: Liu, L., Ouyang, W., Wang, X., Fieguth, P., Chen, J., Liu, X., Pietikäinen, M., 2018a. Deep learning for generic object detection: A survey. arXiv preprint arXiv:1809.02165.
– volume: 8
  start-page: 2053
  year: 2015
  end-page: 2062
  ident: b0540
  article-title: Object detection based on sparse representation and hough voting for optical remote sensing imagery
  publication-title: IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens.
– volume: 12
  start-page: 2183
  year: 2015
  end-page: 2187
  ident: b0545
  article-title: Rotation-invariant object detection in high-resolution satellite imagery using superpixel-based deep hough forests
  publication-title: IEEE Geosci. Remote Sens. Lett.
– volume: 52
  start-page: 6508
  year: 2014
  ident: 10.1016/j.isprsjprs.2019.11.023_b0020
  article-title: VHR object detection based on structural feature extraction and query expansion
  publication-title: IEEE Trans. Geosci. Remote Sens.
  doi: 10.1109/TGRS.2013.2296782
– start-page: 2117
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0280
  article-title: Feature pyramid networks for object detection
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 5822
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0225
  article-title: Tube convolutional neural network (T-CNN) for action detection in videos
  publication-title: Proc. IEEE Int. Conf. Comput. Vision
– start-page: 2874
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0025
  article-title: Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 54
  start-page: 7405
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0075
  article-title: Learning rotation-invariant convolutional neural networks for object detection in VHR optical remote sensing images
  publication-title: IEEE Trans. Geosci. Remote Sens.
  doi: 10.1109/TGRS.2016.2601622
– start-page: 1097
  year: 2012
  ident: 10.1016/j.isprsjprs.2019.11.023_b0255
  article-title: ImageNet classification with deep convolutional neural networks
  publication-title: Proc. Conf. Adv. Neural Inform. Process. Syst.
– volume: 77
  start-page: 157
  year: 2008
  ident: 10.1016/j.isprsjprs.2019.11.023_b0405
  article-title: LabelMe: A database and web-based tool for image annotation
  publication-title: Int. J. Comput. Vis.
  doi: 10.1007/s11263-007-0090-8
– ident: 10.1016/j.isprsjprs.2019.11.023_b0150
– volume: 27
  start-page: 4357
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0345
  article-title: Gabor convolutional networks
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2018.2835143
– start-page: 1799
  year: 2014
  ident: 10.1016/j.isprsjprs.2019.11.023_b0490
  article-title: Joint training of a convolutional network and a graphical model for human pose estimation
  publication-title: Proc. Conf. Adv. Neural Inform. Process. Syst.
– ident: 10.1016/j.isprsjprs.2019.11.023_b0525
– volume: 37
  start-page: 1904
  year: 2014
  ident: 10.1016/j.isprsjprs.2019.11.023_b0205
  article-title: Spatial pyramid pooling in deep convolutional networks for visual recognition
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2015.2389824
– volume: 12
  start-page: 2183
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0545
  article-title: Rotation-invariant object detection in high-resolution satellite imagery using superpixel-based deep hough forests
  publication-title: IEEE Geosci. Remote Sens. Lett.
  doi: 10.1109/LGRS.2015.2432135
– start-page: 675
  year: 2014
  ident: 10.1016/j.isprsjprs.2019.11.023_b0245
  article-title: Caffe: Convolutional architecture for fast feature embedding
  publication-title: Proc. ACM Int. Conf. Multimedia
  doi: 10.1145/2647868.2654889
– start-page: 354
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0035
  article-title: A unified multi-scale deep convolutional neural network for fast object detection
  publication-title: Proc. Eur. Conf. Comput. Vis.
– volume: 35
  start-page: 1915
  year: 2013
  ident: 10.1016/j.isprsjprs.2019.11.023_b0090
  article-title: Learning hierarchical features for scene labeling
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2012.231
– volume: 56
  start-page: 2811
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0070
  article-title: When deep learning meets metric learning: remote sensing image scene classification via learning discriminative CNNs
  publication-title: IEEE Trans. Geosci. Remote Sens.
  doi: 10.1109/TGRS.2017.2783902
– volume: 27
  start-page: 925
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0575
  article-title: Weakly supervised target detection in remote sensing images based on transferred deep features and negative bootstrapping
  publication-title: Multidimens. Syst. Signal Process.
  doi: 10.1007/s11045-015-0370-3
– start-page: 779
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0385
  article-title: You only look once: Unified, real-time object detection
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 11
  start-page: 1
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0535
  article-title: Ship detection in optical remote sensing images based on deep convolutional neural networks
  publication-title: J. Appl. Remote Sens.
  doi: 10.1117/1.JRS.11.042611
– volume: 17
  start-page: 2720
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0565
  article-title: Robust vehicle detection in aerial images based on cascaded convolutional neural networks
  publication-title: Sensors
  doi: 10.3390/s17122720
– start-page: 1998
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0485
  article-title: Cross-view image matching for geo-localization in urban environments
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 818
  year: 2014
  ident: 10.1016/j.isprsjprs.2019.11.023_b0550
  article-title: Visualizing and understanding convolutional networks
  publication-title: Proc. Eur. Conf. Comput. Vis.
– volume: 98
  start-page: 119
  year: 2014
  ident: 10.1016/j.isprsjprs.2019.11.023_b0060
  article-title: Multi-class geospatial object detection and geographic image classification based on collection of part detectors
  publication-title: ISPRS J. Photogramm. Remote Sens.
  doi: 10.1016/j.isprsjprs.2014.10.002
– start-page: 3431
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0335
  article-title: Fully convolutional networks for semantic segmentation
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 1
  year: 2009
  ident: 10.1016/j.isprsjprs.2019.11.023_b0480
  article-title: Overhead imagery research data set — an annotated data library & tools to aid in the development of computer vision algorithms
– volume: 13
  start-page: 1074
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0330
  article-title: Ship rotated bounding box space for ship extraction from high-resolution optical satellite images with complex backgrounds
  publication-title: IEEE Geosci. Remote Sens. Lett.
  doi: 10.1109/LGRS.2016.2565705
– volume: 34
  start-page: 45
  year: 2013
  ident: 10.1016/j.isprsjprs.2019.11.023_b0045
  article-title: Automatic landslide detection from remote-sensing imagery using a scene classification method based on BoVW and pLSA
  publication-title: Int. J. Remote Sens.
  doi: 10.1080/01431161.2012.705443
– start-page: 1
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0355
  article-title: End-to-end learning of latent deformable part-based representations for object detection
  publication-title: Int. J. Comput. Vis.
– volume: 2010
  start-page: 73
  year: 2010
  ident: 10.1016/j.isprsjprs.2019.11.023_b0095
  article-title: The DGPF-test on digital airborne camera evaluation-overview and test design
  publication-title: Photogrammetrie - Fernerkundung - Geoinformation
  doi: 10.1127/1432-8364/2010/0041
– start-page: 1134
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0155
  article-title: Object detection via a multi-region and semantic segmentation-aware CNN model
– start-page: 764
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0105
  article-title: Deformable convolutional networks
  publication-title: Proc. IEEE Int. Conf. Comput. Vision
– start-page: 7132
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0230
  article-title: Squeeze-and-excitation networks
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 1440
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0160
  article-title: Fast R-CNN
  publication-title: Proc. IEEE Int. Conf. Comput. Vision
– volume: 15
  start-page: 937
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0325
  article-title: Arbitrary-oriented ship detection framework in optical remote-sensing images
  publication-title: IEEE Geosci. Remote Sens. Lett.
  doi: 10.1109/LGRS.2018.2813094
– start-page: 21
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0320
  article-title: SSD: Single Shot MultiBox Detector
– volume: 36
  start-page: 618
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0510
  article-title: Elliptic Fourier transformation-based histograms of oriented gradients for rotationally invariant object detection in remote-sensing images
  publication-title: Int. J. Remote Sens.
  doi: 10.1080/01431161.2014.999881
– volume: 9
  start-page: 1170
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0470
  article-title: Arbitrary-oriented vehicle detection in aerial imagery with single convolutional neural networks
  publication-title: Remote Sens.
  doi: 10.3390/rs9111170
– volume: 17
  start-page: 336
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0475
  article-title: Vehicle detection in aerial images based on region convolutional neural networks and hard negative example mining
  publication-title: Sensors
  doi: 10.3390/s17020336
– start-page: 785
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0360
  article-title: A large contextual dataset for classification, detection and counting of cars with deep learning
  publication-title: Proc. Eur. Conf. Comput. Vis.
– volume: 104
  start-page: 154
  year: 2013
  ident: 10.1016/j.isprsjprs.2019.11.023_b0495
  article-title: Selective search for object recognition
  publication-title: Int. J. Comput. Vis.
  doi: 10.1007/s11263-013-0620-5
– volume: 34
  start-page: 33
  year: 2011
  ident: 10.1016/j.isprsjprs.2019.11.023_b0030
  article-title: Building development monitoring in multitemporal remotely sensed image pairs with stochastic birth-death dynamics
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2011.94
– volume: 9
  start-page: 618
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0125
  article-title: Convolutional neural networks based hyperspectral image classification method with adaptive kernels
  publication-title: Remote Sens.
  doi: 10.3390/rs9060618
– start-page: 483
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0365
  article-title: Stacked hourglass networks for human pose estimation
  publication-title: Proc. Eur. Conf. Comput. Vis.
– volume: 32
  start-page: 1627
  year: 2010
  ident: 10.1016/j.isprsjprs.2019.11.023_b0145
  article-title: Object detection with discriminatively trained part-based models
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2009.167
– volume: 138
  start-page: 281
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0570
  article-title: Multi-class geospatial object detection based on a position-sensitive balancing framework for high spatial resolution remote sensing imagery
  publication-title: ISPRS J. Photogramm. Remote Sens.
  doi: 10.1016/j.isprsjprs.2018.02.014
– start-page: 3735
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0580
  article-title: Orientation robust object detection in aerial images using deep convolutional neural network
– start-page: 2999
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0290
  article-title: Focal loss for dense object detection
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– start-page: 734
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0260
  article-title: CornerNet: Detecting objects as paired keypoints
  publication-title: Proc. Eur. Conf. Comput. Vis.
– start-page: 1
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0460
  article-title: Going deeper with convolutions
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 10
  start-page: 131
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0170
  article-title: Geospatial object detection in high resolution satellite images based on multi-scale convolutional neural network
  publication-title: Remote Sens.
  doi: 10.3390/rs10010131
– volume: 13
  start-page: 740
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0420
  article-title: Convolutional neural network based automatic object detection on aerial images
  publication-title: IEEE Geosci. Remote Sens. Lett.
  doi: 10.1109/LGRS.2016.2542358
– volume: 52
  start-page: 6627
  year: 2014
  ident: 10.1016/j.isprsjprs.2019.11.023_b0015
  article-title: Detection of compound structures using a Gaussian mixture model with spectral and spatial constraints
  publication-title: IEEE Trans. Geosci. Remote Sens.
  doi: 10.1109/TGRS.2014.2299540
– start-page: 2884
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0080
  article-title: RIFD-CNN: rotation-invariant and fisher discriminative convolutional neural networks for object detection
– start-page: 580
  year: 2014
  ident: 10.1016/j.isprsjprs.2019.11.023_b0165
  article-title: Rich feature hierarchies for accurate object detection and semantic segmentation
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 12
  start-page: 1938
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0295
  article-title: Fast multiclass vehicle detection on aerial images
  publication-title: IEEE Geosci. Remote Sens. Lett.
  doi: 10.1109/LGRS.2015.2439517
– volume: 111
  start-page: 98
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0130
  article-title: The pascal visual object classes challenge: A retrospective
  publication-title: Int. J. Comput. Vis.
  doi: 10.1007/s11263-014-0733-5
– start-page: 1
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0375
  article-title: Automatic differentiation in PyTorch
  publication-title: Proc. Conf. Adv. Neural Inform. Process. Syst. Workshop
– volume: 54
  start-page: 5832
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0600
  article-title: Ship detection in spaceborne optical image with SVD networks
  publication-title: IEEE Trans. Geosci. Remote Sens.
  doi: 10.1109/TGRS.2016.2572736
– start-page: 391
  year: 2014
  ident: 10.1016/j.isprsjprs.2019.11.023_b0595
  article-title: Edge boxes: locating object proposals from edges
  publication-title: Proc. Eur. Conf. Comput. Vis.
– volume: 28
  start-page: 265
  year: 2019
  ident: 10.1016/j.isprsjprs.2019.11.023_b0065
  article-title: Learning rotation-invariant and fisher discriminative convolutional neural networks for object detection
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2018.2867198
– volume: 117
  start-page: 11
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0050
  article-title: A survey on object detection in optical remote sensing images
  publication-title: ISPRS J. Photogramm. Remote Sens.
  doi: 10.1016/j.isprsjprs.2016.03.014
– start-page: 4700
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0235
  article-title: Densely connected convolutional networks
– start-page: 6517
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0390
  article-title: YOLO9000: better, faster, stronger
– volume: 8
  start-page: 4895
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0560
  article-title: A hierarchical oil tank detector with deep surrounding features for high-resolution optical satellite imagery
  publication-title: IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens.
  doi: 10.1109/JSTARS.2015.2467377
– volume: 55
  start-page: 2486
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0340
  article-title: Accurate object localization in remote sensing images based on convolutional neural networks
  publication-title: IEEE Trans. Geosci. Remote Sens.
  doi: 10.1109/TGRS.2016.2645610
– ident: 10.1016/j.isprsjprs.2019.11.023_b0395
– volume: 14
  start-page: 1293
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0530
  article-title: M-FCN: effective fully convolutional network-based airplane detection framework
  publication-title: IEEE Geosci. Remote Sens. Lett.
  doi: 10.1109/LGRS.2017.2708722
– start-page: 3353
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0190
  article-title: Robust geospatial object detection based on pre-trained faster R-CNN framework for high spatial resolution imagery
  publication-title: Proc. IEEE Int. Geosci. Remote Sens. Sympos.
– start-page: 1
  year: 2014
  ident: 10.1016/j.isprsjprs.2019.11.023_b0415
  article-title: OverFeat: integrated recognition, localization and detection using convolutional networks
  publication-title: Proc. Int. Conf. Learn. Represent.
– volume: 9
  start-page: 666
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0195
  article-title: An efficient and robust integrated geospatial object detection framework for high spatial resolution remote sensing imagery
  publication-title: Remote Sens.
  doi: 10.3390/rs9070666
– start-page: 379
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0100
  article-title: R-FCN: Object detection via region-based fully convolutional networks
  publication-title: Proc. Conf. Adv. Neural Inform. Process. Syst.
– ident: 10.1016/j.isprsjprs.2019.11.023_b0305
– year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0200
  article-title: Mask R-CNN
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– start-page: 1081
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0445
  article-title: R-FCN-3000 at 30fps: decoupling detection and classification
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 3578
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0440
  article-title: An analysis of scale invariance in object detection - SNIP
– volume: 40
  start-page: 834
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0040
  article-title: DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2017.2699184
– start-page: 196
  year: 2012
  ident: 10.1016/j.isprsjprs.2019.11.023_b0350
  article-title: Strategies for training large scale neural network language models
  publication-title: Proc. IEEE Workshop Autom. Speech Recognit. Underst.
– start-page: 1
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0435
  article-title: Very deep convolutional networks for large-scale image recognition
  publication-title: Proc. Int. Conf. Learn. Represent.
– volume: 49
  start-page: 3906
  year: 2011
  ident: 10.1016/j.isprsjprs.2019.11.023_b0110
  article-title: Use of salient features for the design of a multistage framework to extract roads from high-resolution multispectral satellite images
  publication-title: IEEE Trans. Geosci. Remote Sens.
  doi: 10.1109/TGRS.2011.2136381
– volume: 9
  start-page: 229
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0520
  article-title: Aircraft detection in remote sensing images based on a deep residual network and Super-Vector coding
  publication-title: Remote Sens. Lett.
  doi: 10.1080/2150704X.2017.1415474
– volume: 35
  start-page: 84
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0180
  article-title: Advanced deep-learning techniques for salient and category-specific object detection: a survey
  publication-title: IEEE Signal Process. Magaz.
  doi: 10.1109/MSP.2017.2749125
– ident: 10.1016/j.isprsjprs.2019.11.023_b0310
  doi: 10.1007/978-3-030-01252-6_24
– start-page: 265
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0005
  article-title: TensorFlow: a system for large-scale machine learning
  publication-title: Proc. Conf. Oper. Syst. Des. Implement
– start-page: 330
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0425
  article-title: Contextual priming and feedback for Faster R-CNN
  publication-title: Proc. Eur. Conf. Comput. Vis.
– volume: 8
  start-page: 2053
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0540
  article-title: Object detection based on sparse representation and hough voting for optical remote sensing imagery
  publication-title: IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens.
  doi: 10.1109/JSTARS.2015.2404578
– start-page: 4703
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0590
  article-title: segDeepM: Exploiting segmentation and context in deep neural networks for object detection
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 8759
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0315
  article-title: Path aggregation network for instance segmentation
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 56
  start-page: 2337
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0265
  article-title: Rotation-insensitive and context-augmented object detection in remote sensing images
  publication-title: IEEE Trans. Geosci. Remote Sens.
  doi: 10.1109/TGRS.2017.2778300
– ident: 10.1016/j.isprsjprs.2019.11.023_b0010
– ident: 10.1016/j.isprsjprs.2019.11.023_b0270
– ident: 10.1016/j.isprsjprs.2019.11.023_b0300
  doi: 10.1007/s11263-019-01247-4
– volume: 39
  start-page: 1320
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0370
  article-title: DeepID-Net: object detection with deformable part based convolutional neural networks
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2016.2587642
– start-page: 3337
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0140
  article-title: Efficient object proposals extraction for target detection in VHR remote sensing images
  publication-title: Proc. IEEE Int. Geosci. Remote Sens. Sympos.
– volume: 53
  start-page: 3325
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0175
  article-title: Object detection in optical remote sensing images based on weakly supervised learning and high-level feature learning
  publication-title: IEEE Trans. Geosci. Remote Sens.
  doi: 10.1109/TGRS.2014.2374218
– volume: 14
  start-page: 1665
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0275
  article-title: Fully convolutional network with task partitioning for inshore ship detection in optical remote sensing images
  publication-title: IEEE Geosci. Remote Sens. Lett.
  doi: 10.1109/LGRS.2017.2727515
– volume: 9
  start-page: 1312
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0515
  article-title: Deformable ConvNet with aspect ratio constrained NMS for object detection in remote sensing imagery
  publication-title: Remote Sens.
  doi: 10.3390/rs9121312
– ident: 10.1016/j.isprsjprs.2019.11.023_b0085
– start-page: 740
  year: 2014
  ident: 10.1016/j.isprsjprs.2019.11.023_b0285
  article-title: Microsoft COCO: Common objects in context
  publication-title: Proc. Eur. Conf. Comput. Vis.
– volume: 10
  start-page: 3652
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0120
  article-title: Toward fast and accurate vehicle detection in aerial images using coupled region-based convolutional neural networks
  publication-title: IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens.
  doi: 10.1109/JSTARS.2017.2694890
– volume: 34
  start-page: 187
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0380
  article-title: Vehicle detection in aerial imagery: A small target detection benchmark
  publication-title: J. Vis. Commun. Image Represent.
  doi: 10.1016/j.jvcir.2015.11.002
– volume: 85
  start-page: 32
  year: 2013
  ident: 10.1016/j.isprsjprs.2019.11.023_b0055
  article-title: Object detection in remote sensing imagery using a discriminatively trained mixture model
  publication-title: ISPRS J. Photogramm. Remote Sens.
  doi: 10.1016/j.isprsjprs.2013.08.001
– start-page: 761
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0430
  article-title: Training region-based object detectors with online hard example mining
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 9310
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0450
  article-title: SNIPER: Efficient multi-scale training
  publication-title: Proc. Conf. Adv. Neural Inform. Process. Syst.
– volume: 5
  start-page: 8
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0585
  article-title: Deep learning in remote sensing: a comprehensive review and list of resources
  publication-title: IEEE Geosci. Remote Sens. Magaz.
  doi: 10.1109/MGRS.2017.2762307
– start-page: 1893
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0410
  article-title: Detection of seals in remote sensing images using features extracted from deep convolutional neural networks
  publication-title: Proc. IEEE Int. Geosci. Remote Sens. Sympos.
– start-page: 448
  year: 2015
  ident: 10.1016/j.isprsjprs.2019.11.023_b0240
  article-title: Batch normalization: accelerating deep network training by reducing internal covariate shift
– volume: 10
  start-page: 783
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0500
  article-title: Deep cube-pair network for hyperspectral imagery classification
  publication-title: Remote Sens.
  doi: 10.3390/rs10050783
– start-page: 30
  year: 2008
  ident: 10.1016/j.isprsjprs.2019.11.023_b0215
  article-title: Learning spatial context: using stuff to find things
  publication-title: Proc. Eur. Conf. Comput. Vis.
– volume: 39
  start-page: 1137
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0400
  article-title: Faster R-CNN: towards real-time object detection with region proposal networks
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2016.2577031
– start-page: 12
  year: 2017
  ident: 10.1016/j.isprsjprs.2019.11.023_b0455
  article-title: Inception-v4, inception-resnet and the impact of residual connections on learning
– volume: 88
  start-page: 303
  year: 2010
  ident: 10.1016/j.isprsjprs.2019.11.023_b0135
  article-title: The pascal visual object classes (voc) challenge
  publication-title: Int. J. Comput. Vis.
  doi: 10.1007/s11263-009-0275-4
– start-page: 845
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0250
  article-title: HyperNet: Towards accurate region proposal generation and joint object detection
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 770
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0210
  article-title: Deep residual learning for image recognition
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– start-page: 2818
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0465
  article-title: Rethinking the inception architecture for computer vision
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 54
  start-page: 5553
  year: 2016
  ident: 10.1016/j.isprsjprs.2019.11.023_b0555
  article-title: Weakly supervised learning based on coupled convolutional neural networks for aircraft detection
  publication-title: IEEE Trans. Geosci. Remote Sens.
  doi: 10.1109/TGRS.2016.2569141
– start-page: 248
  year: 2009
  ident: 10.1016/j.isprsjprs.2019.11.023_b0115
  article-title: ImageNet: A large-scale hierarchical image database
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 89
  start-page: 37
  year: 2014
  ident: 10.1016/j.isprsjprs.2019.11.023_b0185
  article-title: Efficient, simultaneous detection of multi-class geospatial targets based on visual saliency modeling and discriminative learning of sparse coding
  publication-title: ISPRS J. Photogramm. Remote Sens.
  doi: 10.1016/j.isprsjprs.2013.12.011
– start-page: 3974
  year: 2018
  ident: 10.1016/j.isprsjprs.2019.11.023_b0505
  article-title: DOTA: A large-scale dataset for object detection in aerial images
  publication-title: Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit.
– volume: 29
  start-page: 82
  year: 2012
  ident: 10.1016/j.isprsjprs.2019.11.023_b0220
  article-title: Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups
  publication-title: IEEE Signal Process. Magaz.
  doi: 10.1109/MSP.2012.2205597
StartPage 296
SubjectTerms Benchmark dataset
computer vision
Convolutional Neural Network (CNN)
data collection
Deep learning
image analysis
Object detection
Optical remote sensing images
remote sensing
surveys
Title Object detection in optical remote sensing images: A survey and a new benchmark
URI https://dx.doi.org/10.1016/j.isprsjprs.2019.11.023
https://www.proquest.com/docview/2400518149
Volume 159