SemEval-2022 Task 7: Identifying Plausible Clarifications of Implicit and Underspecified Phrases in Instructional Texts
Format | Journal Article
Language | English
Published | 21.09.2023
Summary: We describe SemEval-2022 Task 7, a shared task on rating the plausibility of clarifications in instructional texts. The dataset for this task consists of manually clarified how-to guides for which we generated alternative clarifications and collected human plausibility judgements. The task of participating systems was to automatically determine the plausibility of a clarification in the respective context. In total, 21 participants took part in this task, with the best system achieving an accuracy of 68.9%. This report summarizes the results and findings from 8 teams and their system descriptions. Finally, we show in an additional evaluation that predictions by the top participating team make it possible to identify contexts with multiple plausible clarifications with an accuracy of 75.2%.
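To make the evaluation setup concrete, the sketch below shows how accuracy over plausibility classifications could be computed. The instance layout, field names, and label set here are illustrative assumptions, not the official task format; the toy baseline stands in for a participating system.

```python
from dataclasses import dataclass

# Hypothetical instance layout; the real SemEval-2022 Task 7 data files
# may use different field names and label sets.
@dataclass
class Instance:
    context: str  # how-to step containing a ___ placeholder
    filler: str   # candidate clarification for the placeholder
    label: str    # gold plausibility class

def accuracy(gold, pred):
    """Fraction of instances where the predicted class matches gold."""
    assert len(gold) == len(pred)
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

# Toy data with a trivial majority-class baseline as the "system".
data = [
    Instance("Cut the ___ into small pieces.", "onion", "PLAUSIBLE"),
    Instance("Cut the ___ into small pieces.", "sky", "IMPLAUSIBLE"),
    Instance("Let it rest for ___ minutes.", "ten", "PLAUSIBLE"),
]
preds = ["PLAUSIBLE"] * len(data)
print(round(accuracy([d.label for d in data], preds), 3))  # → 0.667
```

A real system would replace the majority-class baseline with a model that scores each filler in its context; the reported 68.9% best-system accuracy refers to exactly this kind of per-instance classification comparison.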
DOI | 10.48550/arxiv.2309.12102