Japanese-English Sentence Translation Exercises Dataset for Automatic Grading

Bibliographic Details
Published in: arXiv.org
Main Authors: Miura, Naoki; Funayama, Hiroaki; Kikuchi, Seiya; Matsubayashi, Yuichiroh; Iwase, Yuya; Inui, Kentaro
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 06.03.2024

Summary: This paper proposes the task of automatic assessment of Sentence Translation Exercises (STEs), which are used in the early stages of L2 language learning. We formalize the task as grading student responses against each rubric criterion pre-specified by the educators. We then create a dataset of STEs between Japanese and English comprising 21 questions and a total of 3,498 student responses (167 per question on average), collected from students and crowd workers. Using this dataset, we report the performance of baselines, including fine-tuned BERT and GPT models with few-shot in-context learning. Experimental results show that the fine-tuned BERT baseline classifies correct responses with an F1 of approximately 90%, but reaches less than 80% F1 on incorrect responses. Furthermore, the GPT models with few-shot learning perform worse than fine-tuned BERT, indicating that our newly proposed task remains challenging even for state-of-the-art large language models.
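As a rough illustration of the evaluation setup the summary describes (not the authors' code), grading a response against one rubric criterion can be viewed as binary classification, with F1 reported separately for the "correct" and "incorrect" classes. The labels below are invented for illustration:

```python
def f1_score(gold, pred, positive):
    """Binary F1 for one class. The paper reports F1 separately for
    correct and incorrect responses, so we parameterize the positive class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if p == positive and g != positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy gold and predicted labels for one rubric criterion
# (1 = criterion satisfied, 0 = not satisfied; values are made up).
gold = [1, 1, 0, 1, 0, 0, 1, 1]
pred = [1, 1, 0, 0, 0, 1, 1, 1]

print(f1_score(gold, pred, positive=1))  # F1 on correct responses: 0.8
print(f1_score(gold, pred, positive=0))  # F1 on incorrect responses: ~0.667
```

The gap between the two F1 values mirrors the paper's finding: with fewer incorrect responses in the data, the minority class is typically the harder one to classify.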
ISSN: 2331-8422