Estimating Consensus from Crowdsourced Continuous Annotations

Bibliographic Details
Published in: 2020 3rd International Conference on Communication System, Computing and IT Applications (CSCITA), pp. 156-161
Main Authors: Chapaneri, Santosh; Jayaswal, Deepak
Format: Conference Proceeding
Language: English
Published: IEEE, 01.04.2020

Summary: With the emergence of crowdsourcing services, the concept of the wisdom of crowds has gained immense popularity. To capture the subjective nature of annotation tasks, multiple annotators are asked to give their responses using crowdsourcing tools. Unfortunately, inattentive and adversarial annotators threaten the quality and trustworthiness of the consensus. In this work, we focus on crowd consensus estimation for continuous labels using a probabilistic approach. Considerable existing work addresses annotator behavior modeling for categorical labels; however, there is limited work on the continuous case. We propose a maximum-likelihood solution that determines the estimated consensus while simultaneously modeling the behavior of the individual annotators. Further, to handle the long-tail phenomenon commonly observed in crowdsourced datasets, a confidence-interval based estimate of the consensus is derived. The proposed technique is shown to outperform both simple averaging of the annotation values and existing methods.
DOI: 10.1109/CSCITA47329.2020.9137784
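
The record above does not include the authors' implementation. As a rough illustration of the general idea of maximum-likelihood consensus estimation with per-annotator behavior modeling, the following minimal Python sketch assumes each annotator reports the true continuous label plus zero-mean Gaussian noise with an annotator-specific variance, and alternates between re-estimating those variances and the precision-weighted consensus. The function name ml_consensus, the noise model, and the update scheme are illustrative assumptions, not the algorithm from the paper.

import numpy as np

def ml_consensus(Y, n_iter=50, tol=1e-6):
    # Y: (n_items, n_annotators) array of continuous annotations, np.nan = missing.
    # Illustrative model: annotation = true label + Gaussian noise with an
    # annotator-specific variance (an assumption, not the paper's exact model).
    mask = ~np.isnan(Y)
    z = np.nanmean(Y, axis=1)            # start from the plain per-item average
    sigma2 = np.ones(Y.shape[1])

    for _ in range(n_iter):
        # Re-estimate each annotator's noise variance from deviations to the consensus.
        resid = np.where(mask, Y - z[:, None], 0.0)
        counts = np.maximum(mask.sum(axis=0), 1)
        sigma2 = np.maximum((resid ** 2).sum(axis=0) / counts, 1e-8)

        # Re-estimate the consensus as the precision-weighted mean of the annotations.
        w = np.where(mask, 1.0 / sigma2[None, :], 0.0)
        z_new = (w * np.where(mask, Y, 0.0)).sum(axis=1) / w.sum(axis=1)

        if np.max(np.abs(z_new - z)) < tol:
            z = z_new
            break
        z = z_new

    return z, sigma2

# Toy usage: the third annotator behaves adversarially and gets down-weighted.
Y = np.array([[3.0, 3.2, 9.0],
              [5.0, 4.8, 1.0],
              [2.0, 2.1, 8.5]])
consensus, noise_var = ml_consensus(Y)

On this toy matrix the estimated variance grows for the adversarial annotator and the consensus moves toward the reliable annotators, which is the qualitative behavior the abstract describes; the paper's confidence-interval based variant additionally addresses long-tailed annotation distributions, a detail not reproduced in this sketch.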