Estimating Consensus from Crowdsourced Continuous Annotations
| Published in | 2020 3rd International Conference on Communication System, Computing and IT Applications (CSCITA), pp. 156–161 |
|---|---|
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.04.2020 |
| Summary | With the emergence of crowdsourcing services, the concept of the wisdom of crowds has gained immense popularity. To capture subjective phenomena, multiple annotators are asked to give their responses using crowdsourcing tools. Unfortunately, inattentive and adversarial annotators pose a threat to the quality and trustworthiness of the consensus. In this work, we focus on crowd consensus estimation of continuous labels using a probabilistic approach. Considerable existing work addresses annotator behavior modeling in the categorical case; however, there is limited work on the continuous case. We propose a maximum-likelihood solution that determines the estimated consensus while simultaneously modeling the behavior of the various annotators. Further, to handle the long-tail phenomenon commonly observed in crowdsourced datasets, a confidence-interval-based consensus estimate is derived. The proposed technique is shown to perform better than averaging the annotation values and than existing methods. |
| DOI | 10.1109/CSCITA47329.2020.9137784 |
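The abstract describes a maximum-likelihood consensus estimate for continuous labels that jointly models annotator behavior, plus a confidence-interval-based variant for long-tail data. The sketch below is not the paper's model; it is a minimal illustration assuming each annotator adds zero-mean Gaussian noise with its own variance, estimated by alternating precision-weighted consensus updates with per-annotator variance updates. All names (`em_consensus`, the synthetic data) are hypothetical, and the paper's confidence-interval derivation is not reproduced here.

```python
import numpy as np

def em_consensus(labels, mask, n_iters=50, tol=1e-6):
    """EM-style maximum-likelihood consensus for continuous crowd labels.

    labels : (n_items, n_annotators) array; unobserved entries may be NaN.
    mask   : boolean array of the same shape, True where a label was given.
    Assumes (as a simplification, not the paper's exact model) that each
    annotator adds zero-mean Gaussian noise with its own variance and that
    every item has at least one annotation.
    Returns (consensus estimates, per-annotator noise variances).
    """
    n_items, n_annot = labels.shape
    obs = np.where(mask, labels, 0.0)

    # Initialise the consensus with the plain per-item mean of observed labels.
    mu = obs.sum(axis=1) / mask.sum(axis=1)
    sigma2 = np.ones(n_annot)

    for _ in range(n_iters):
        # Update each annotator's noise variance from residuals to the consensus.
        resid2 = (labels - mu[:, None]) ** 2
        sigma2 = np.array([
            resid2[mask[:, j], j].mean() if mask[:, j].any() else 1.0
            for j in range(n_annot)
        ])
        sigma2 = np.maximum(sigma2, 1e-8)

        # Update the consensus as a precision-weighted average of observed labels.
        w = mask / sigma2[None, :]
        mu_new = (w * obs).sum(axis=1) / w.sum(axis=1)

        if np.max(np.abs(mu_new - mu)) < tol:
            mu = mu_new
            break
        mu = mu_new

    return mu, sigma2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_items, n_annot = 200, 5
    true_scores = rng.uniform(0.0, 10.0, size=n_items)
    noise_sd = np.array([0.3, 0.5, 2.5, 0.4, 3.0])   # two deliberately noisy annotators
    noisy = true_scores[:, None] + rng.normal(0.0, noise_sd, size=(n_items, n_annot))

    mask = rng.uniform(size=noisy.shape) < 0.8        # ~80% of labels observed
    mask[np.arange(n_items), rng.integers(0, n_annot, size=n_items)] = True
    labels = np.where(mask, noisy, np.nan)

    mu, sigma2 = em_consensus(labels, mask)
    print("estimated annotator noise SDs:", np.sqrt(sigma2).round(2))
    print("MAE of weighted consensus:", np.abs(mu - true_scores).mean().round(3))
    print("MAE of plain averaging:   ",
          np.abs(np.nanmean(labels, axis=1) - true_scores).mean().round(3))

    # Rough 95% interval half-width per item from the combined label precision
    # (illustrative only; the paper derives its own confidence-interval consensus).
    precision = (mask / sigma2[None, :]).sum(axis=1)
    print("mean 95% half-width:", (1.96 / np.sqrt(precision)).mean().round(3))
```

On synthetic data of this kind, the precision-weighted consensus typically tracks the ground truth more closely than the plain average because the noisy annotators are down-weighted once their variances are estimated, which mirrors the motivation stated in the abstract.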