UAVid: A semantic segmentation dataset for UAV imagery

Bibliographic Details
Published in ISPRS journal of photogrammetry and remote sensing, Vol. 165, pp. 108-119
Main Authors Lyu, Ye; Vosselman, George; Xia, Gui-Song; Yilmaz, Alper; Yang, Michael Ying
Format Journal Article
Language English
Published Elsevier B.V., 01.07.2020

Abstract Semantic segmentation has recently been one of the leading research interests in computer vision. It serves as a perception foundation for many fields, such as robotics and autonomous driving. The fast development of semantic segmentation owes much to large-scale datasets, especially for deep-learning-based methods. Several semantic segmentation datasets already exist for comparing methods in complex urban scenes, such as the Cityscapes and CamVid datasets, in which side views of objects are captured with a camera mounted on a driving car. Semantic labeling datasets also exist for airborne and satellite images, in which nadir views of objects are captured. However, only a few datasets capture urban scenes from an oblique Unmanned Aerial Vehicle (UAV) perspective, where both the top and side views of objects can be observed, providing more information for object recognition. In this paper, we introduce the UAVid dataset, a new high-resolution UAV semantic segmentation dataset that complements existing benchmarks and brings new challenges, including large scale variation, moving object recognition and temporal consistency preservation. The dataset consists of 30 video sequences capturing high-resolution images in oblique views. In total, 300 images have been densely labeled with 8 classes for the semantic labeling task. We provide several deep learning baseline methods with pre-training, among which the proposed Multi-Scale-Dilation net performs best via multi-scale feature extraction, reaching a mean intersection-over-union (IoU) score of around 50%. We also explore the influence of spatial-temporal regularization for sequence data by leveraging feature space optimization (FSO) and a 3D conditional random field (CRF). The UAVid website and the labeling tool have been published online (https://uavid.nl/).
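For context, the mean IoU score quoted above is the standard per-class intersection-over-union averaged over the 8 annotated classes. The following is a minimal sketch, not the authors' released evaluation code, of how such a score is typically accumulated from dense ground-truth and predicted label maps; the NumPy helper names, shapes and usage lines are illustrative assumptions.

import numpy as np

NUM_CLASSES = 8  # UAVid annotates 8 semantic classes

def confusion_matrix(gt, pred, num_classes=NUM_CLASSES):
    # Accumulate a num_classes x num_classes confusion matrix from two integer
    # label maps of identical shape; pred is assumed to lie in [0, num_classes).
    valid = (gt >= 0) & (gt < num_classes)
    idx = num_classes * gt[valid].astype(np.int64) + pred[valid].astype(np.int64)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def mean_iou(conf):
    # Per-class IoU = TP / (TP + FP + FN); classes absent from both ground
    # truth and prediction are excluded from the mean.
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return np.nanmean(iou), iou

# Usage: sum confusion matrices over all labeled frames, then report the mean.
# conf = sum(confusion_matrix(gt, pred) for gt, pred in labeled_pairs)
# miou, per_class_iou = mean_iou(conf)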
Author Yilmaz, Alper
Lyu, Ye
Vosselman, George
Xia, Gui-Song
Yang, Michael Ying
Author_xml – sequence: 1
  givenname: Ye
  orcidid: 0000-0002-6665-7748
  surname: Lyu
  fullname: Lyu, Ye
  organization: Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, the Netherlands
– sequence: 2
  givenname: George
  surname: Vosselman
  fullname: Vosselman, George
  organization: Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, the Netherlands
– sequence: 3
  givenname: Gui-Song
  orcidid: 0000-0001-7660-6090
  surname: Xia
  fullname: Xia, Gui-Song
  organization: School of Computer Science, State Key Lab. of LIESMARS, Wuhan University, China
– sequence: 4
  givenname: Alper
  surname: Yilmaz
  fullname: Yilmaz, Alper
  organization: Department of Civil, Environmental and Geodetic Engineering, Ohio State University, USA
– sequence: 5
  givenname: Michael Ying
  orcidid: 0000-0002-0649-9987
  surname: Yang
  fullname: Yang, Michael Ying
  email: michael.yang@utwente.nl
  organization: Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, the Netherlands
ContentType Journal Article
Copyright 2020 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS)
DOI 10.1016/j.isprsjprs.2020.05.009
Discipline Geography
Engineering
EISSN 1872-8235
EndPage 119
ISSN 0924-2716
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords Deep learning
UAV
Semantic segmentation
Dataset
Language English
ORCID 0000-0002-6665-7748
0000-0001-7660-6090
0000-0002-0649-9987
OpenAccessLink https://research.utwente.nl/en/publications/0839a8f9-4463-48c1-905e-10e03a7d62f7
PageCount 12
PublicationDate July 2020
PublicationTitle ISPRS journal of photogrammetry and remote sensing
PublicationYear 2020
Publisher Elsevier B.V.
StartPage 108
SubjectTerms cameras
computer vision
data collection
Dataset
Deep learning
remote sensing
robots
Semantic segmentation
UAV
unmanned aerial vehicles
Title UAVid: A semantic segmentation dataset for UAV imagery
URI https://dx.doi.org/10.1016/j.isprsjprs.2020.05.009
https://www.proquest.com/docview/2985925721
Volume 165