Quality Assured: Rethinking Annotation Strategies in Imaging AI

Bibliographic Details
Published in: Computer Vision – ECCV 2024, pp. 52–69
Main Authors: Rädsch, Tim; Reinke, Annika; Weru, Vivienn; Tizabi, Minu D.; Heller, Nicholas; Isensee, Fabian; Kopp-Schneider, Annette; Maier-Hein, Lena
Format: Book Chapter
Language: English
Published: Cham: Springer Nature Switzerland, 25.10.2024
Series: Lecture Notes in Computer Science

Summary: This paper does not describe a novel method. Instead, it studies an essential foundation for reliable benchmarking and, ultimately, real-world application of AI-based image analysis: generating high-quality reference annotations. Previous research has focused on crowdsourcing as a means of outsourcing annotations. However, little attention has so far been given to annotation companies, specifically regarding their internal quality assurance (QA) processes. Our aim is therefore to evaluate the influence of the QA employed by annotation companies on annotation quality and to devise methodologies for maximizing data annotation efficacy. Based on 57,648 instance-segmented images obtained from 924 annotators and 34 QA workers from four annotation companies and Amazon Mechanical Turk (MTurk), we derived the following insights: (1) Annotation companies outperform the widely used MTurk platform in both quantity and quality. (2) Annotation companies' internal QA provides only marginal improvements, if any; however, improving labeling instructions instead of investing in QA can substantially boost annotation performance. (3) The benefit of internal QA depends on specific image characteristics. Our work could enable researchers to derive substantially more value from a fixed annotation budget and change the way annotation companies conduct internal QA.
Bibliography: Supplementary Information: The online version contains supplementary material available at https://doi.org/10.1007/978-3-031-73229-4_4.
A. Kopp-Schneider and L. Maier-Hein: shared last authors.
ISBN: 3031732286; 9783031732287
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-031-73229-4_4