Beyond English: Evaluating LLMs for Arabic Grammatical Error Correction


Bibliographic Details
Published in: arXiv.org
Main Authors: Kwon, Sang Yun; Bhatia, Gagan; Nagoudi, El Moatez Billah; Abdul-Mageed, Muhammad
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 13.12.2023

Summary: Large language models (LLMs) finetuned to follow human instructions have recently exhibited significant capabilities in various English NLP tasks. However, their performance in grammatical error correction (GEC), especially on languages other than English, remains largely unexplored. In this work, we evaluate the abilities of instruction-finetuned LLMs on Arabic GEC, a task made complex by Arabic's rich morphology. Our findings suggest that various prompting methods, coupled with in-context few-shot learning, are considerably effective, with GPT-4 achieving up to \(65.49\) F\(_{1}\) score under expert prompting (approximately \(5\) points higher than our established baseline). Despite these positive results, we find that instruction-finetuned models, regardless of their size, are still outperformed by fully finetuned ones, even when the latter are significantly smaller. This disparity highlights substantial room for improvement in LLMs. Inspired by methods used in low-resource machine translation, we also develop a method exploiting synthetic data that significantly outperforms previous models on two standard Arabic benchmarks. Our best model achieves a new SOTA on Arabic GEC, with \(73.29\) and \(73.26\) F\(_{1}\) on the 2014 and 2015 QALB datasets, respectively, compared to peer-reviewed published baselines.
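The F\(_{1}\) figures above come from edit-based GEC evaluation (the QALB shared tasks score systems with the MaxMatch, or M\(^{2}\), scorer). The sketch below is a deliberately simplified illustration of that idea, not the official scorer: it treats system and gold corrections as plain sets of (span, replacement) edits, and the Arabic example edits are hypothetical.

```python
# Simplified sketch of edit-based F1 for grammatical error correction.
# The real M^2 scorer matches edits against annotator alternatives via
# dynamic programming; here we just intersect two edit sets.

def gec_f1(system_edits, gold_edits):
    """Return (precision, recall, F1) over sets of (span, correction) edits."""
    sys_set, gold_set = set(system_edits), set(gold_edits)
    tp = len(sys_set & gold_set)  # edits the system proposed that match gold
    precision = tp / len(sys_set) if sys_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

# Hypothetical example: 3 system edits, 4 gold edits, 2 in common.
p, r, f = gec_f1(
    [((0, 1), "الكتاب"), ((3, 4), "ذهبت"), ((7, 8), "في")],
    [((0, 1), "الكتاب"), ((3, 4), "ذهبت"), ((5, 6), "إلى"), ((9, 10), "هذه")],
)
# precision = 2/3, recall = 2/4, F1 = 4/7 ≈ 0.571
```

Scaling this to the paper's setting, a score of \(73.29\) corresponds to an F\(_{1}\) of 0.7329 computed over all edits in the QALB-2014 test set.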
ISSN:2331-8422