Estimating Agreement by Chance for Sequence Annotation


Bibliographic Details
Published in: arXiv.org
Main Authors: Li, Diya; Rosé, Carolyn; Yuan, Ao; Zhou, Chunxiao
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 16.07.2024

Summary: In natural language processing, correcting performance assessments for chance agreement plays a crucial role in evaluating the reliability of annotations. However, little research has addressed chance correction for sequence annotation tasks, despite their prevalence in the field. To address this gap, this paper introduces a novel model for generating random annotations, which serves as the foundation for estimating chance agreement in sequence annotation tasks. Using the proposed randomization model and a related comparison approach, the authors derive the analytical form of the distribution, enabling computation of the probable location of each annotated text segment and, from it, an estimate of chance agreement. A combination of simulation and corpus-based evaluation assesses the method's applicability and validates its accuracy and efficacy.
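The general idea described in the abstract, estimating expected agreement under a model of random annotations and using it to correct an observed agreement score, can be illustrated with a minimal Monte Carlo sketch. The uniform span-placement model, the token-level agreement measure, and all function names below are illustrative assumptions for this sketch, not the paper's actual analytical method:

```python
import random

def random_spans(n_tokens, n_spans, span_len, rng):
    """Place n_spans spans of span_len tokens at uniformly random
    start positions; return the set of covered token indices.
    (A stand-in for a random-annotation generator.)"""
    covered = set()
    for _ in range(n_spans):
        start = rng.randrange(n_tokens - span_len + 1)
        covered.update(range(start, start + span_len))
    return covered

def token_agreement(a, b, n_tokens):
    """Fraction of tokens on which two annotations agree
    (both inside a span, or both outside)."""
    return sum((i in a) == (i in b) for i in range(n_tokens)) / n_tokens

def chance_agreement(n_tokens, n_spans, span_len, trials=2000, seed=0):
    """Monte Carlo estimate of expected agreement p_e between
    two independent random annotators."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        a = random_spans(n_tokens, n_spans, span_len, rng)
        b = random_spans(n_tokens, n_spans, span_len, rng)
        total += token_agreement(a, b, n_tokens)
    return total / trials

def chance_corrected(p_o, p_e):
    """Kappa-style chance correction: (p_o - p_e) / (1 - p_e)."""
    return (p_o - p_e) / (1.0 - p_e)
```

For example, `chance_corrected(p_o, chance_agreement(100, 2, 5))` rescales an observed agreement `p_o` so that purely random annotation scores 0 and perfect agreement scores 1; the paper replaces the simulation step with an analytically derived distribution over segment locations.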
ISSN: 2331-8422