Building a Fully-Automatized Active Learning Framework for the Semantic Segmentation of Geospatial 3D Point Clouds

Bibliographic Details
Published in: Journal of Photogrammetry, Remote Sensing and Geoinformation Science, Vol. 92, No. 2, pp. 131–161
Main Authors: Kölle, Michael; Walter, Volker; Sörgel, Uwe
Format: Journal Article
Language: English
Published: Cham, Springer International Publishing, 01.04.2024
ISSN: 2512-2789
eISSN: 2512-2819
DOI: 10.1007/s41064-024-00281-3

Abstract: In recent years, significant progress has been made in developing supervised Machine Learning (ML) systems like Convolutional Neural Networks. However, it’s crucial to recognize that the performance of these systems heavily relies on the quality of labeled training data. To address this, we propose a shift in focus towards developing sustainable methods of acquiring such data instead of solely building new classifiers in the ever-evolving ML field. Specifically, in the geospatial domain, the process of generating training data for ML systems has been largely neglected in research. Traditionally, experts have been burdened with the laborious task of labeling, which is not only time-consuming but also inefficient. In our system for the semantic interpretation of Airborne Laser Scanning point clouds, we break with this convention and completely remove labeling obligations from domain experts who have completed special training in geosciences and instead adopt a hybrid intelligence approach. This involves active and iterative collaboration between the ML model and humans through Active Learning, which identifies the most critical samples justifying manual inspection. Only these samples (typically ≪1% of Passive Learning training points) are subject to human annotation. To carry out this annotation, we choose to outsource the task to a large group of non-specialists, referred to as the crowd, which comes with the inherent challenge of guiding those inexperienced annotators (i.e., “short-term employees”) to still produce labels of sufficient quality. However, we acknowledge that attracting enough volunteers for crowdsourcing campaigns can be challenging due to the tedious nature of labeling tasks. To address this, we propose employing paid crowdsourcing and providing monetary incentives to crowdworkers. This approach ensures access to a vast pool of prospective workers through respective platforms, ensuring timely completion of jobs. Effectively, crowdworkers become human processing units in our hybrid intelligence system, mirroring the functionality of electronic processing units.
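
The abstract describes an iterative hybrid-intelligence loop: the model is trained on the labels collected so far, Active Learning selects the few most informative unlabeled points, only those points are sent to paid crowdworkers, and their answers are aggregated into training labels. The Python sketch below illustrates one such iteration; it is not the authors' implementation, and the uncertainty measure (predictive entropy), the scikit-learn-style classifier, and the ask_crowd callback are assumptions chosen for illustration only.

import numpy as np
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def entropy(probs, eps=1e-12):
    # Shannon entropy of the per-point class probabilities (array of shape N x C).
    return -np.sum(probs * np.log(probs + eps), axis=1)

def majority_vote(answers_per_point):
    # Reduce several crowd answers per point to a single label (wisdom of the crowd).
    return np.array([Counter(a).most_common(1)[0][0] for a in answers_per_point])

def active_learning_iteration(X_labeled, y_labeled, X_pool, ask_crowd, batch_size=100):
    # 1. Train the current classifier on all labels collected so far.
    model = RandomForestClassifier(n_estimators=100).fit(X_labeled, y_labeled)
    # 2. Rank the unlabeled pool by predictive uncertainty and pick the most
    #    uncertain points, i.e. the only ones that justify manual inspection.
    uncertainty = entropy(model.predict_proba(X_pool))
    query_idx = np.argsort(uncertainty)[-batch_size:]
    # 3. Outsource exactly these points to the crowd; ask_crowd is a placeholder
    #    returning one list of crowdworker answers per queried point.
    y_new = majority_vote(ask_crowd(query_idx))
    # 4. Fold the newly labeled points into the training set and shrink the pool.
    X_labeled = np.vstack([X_labeled, X_pool[query_idx]])
    y_labeled = np.concatenate([y_labeled, y_new])
    X_pool = np.delete(X_pool, query_idx, axis=0)
    return model, X_labeled, y_labeled, X_pool

Repeating this step until a stopping criterion is met yields a training set labeled almost entirely by crowdworkers, at a small fraction of the effort of labeling the full Passive Learning training set.
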
Authors:
– Kölle, Michael (ORCID 0000-0002-5343-2021, michael.koelle@ifp.uni-stuttgart.de), Institute for Photogrammetry and Geoinformatics, University of Stuttgart
– Walter, Volker, Institute for Photogrammetry and Geoinformatics, University of Stuttgart
– Sörgel, Uwe, Institute for Photogrammetry and Geoinformatics, University of Stuttgart
Copyright: The Author(s) 2024
Discipline: Geography; Geology
Funding: Universität Stuttgart
Open Access: yes
Peer Reviewed: yes
Keywords: Semantic Segmentation; 3D Point Clouds; Hybrid Intelligence System; Paid Crowdsourcing; Active Learning
Open Access Link: https://doi.org/10.1007/s41064-024-00281-3
Page Count: 31
Publication Subtitle: Photogrammetrie, Fernerkundung, Geoinformation
Journal Abbreviation: PFG
Subject Terms: Aerospace Technology and Astronautics; Astronomy; Computer Imaging; Earth and Environmental Science; Geographical Information Systems/Cartography; Geography; Observations and Techniques; Original Article; Pattern Recognition and Graphics; Remote Sensing/Photogrammetry; Signal, Image and Speech Processing; Vision
URI: https://link.springer.com/article/10.1007/s41064-024-00281-3