Choosing the right word: Using bidirectional LSTM tagger for writing support systems

Bibliographic Details
Published in: Engineering Applications of Artificial Intelligence, Vol. 84, pp. 1–10
Main Authors: Makarenkov, Victor; Rokach, Lior; Shapira, Bracha
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.09.2019

Summary: Scientific writing is difficult. It is even harder for those for whom English is a second language (ESL learners). Scholars around the world spend a significant amount of time and resources proofreading their work before submitting it for review or publication. In this paper we present a novel machine-learning-based application for the proper word choice task. Proper word choice is a generalization of the lexical substitution (LS) and grammatical error correction (GEC) tasks. We demonstrate and evaluate the usefulness of applying a bidirectional Long Short-Term Memory (LSTM) tagger for this task. While state-of-the-art grammatical error correction uses error-specific classifiers and machine translation methods, we demonstrate an unsupervised method that is based solely on a high-quality text corpus and does not require manually annotated data. We use a bidirectional Recurrent Neural Network (RNN) with LSTM cells to learn the proper word choice from a word's sentential context. We demonstrate and evaluate our application in various settings, including both a domain-specific (scientific) writing task and a general-purpose writing task. We perform both strict machine and human evaluation. We show that our domain-specific and general-purpose models outperform state-of-the-art general context learning. As an additional contribution of this research, we also share our code, pre-trained models, and a new ESL learner test set with the research community.
ISSN: 0952-1976, 1873-6769
DOI: 10.1016/j.engappai.2019.05.003
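
As a rough illustration of the approach described in the summary above, the following PyTorch sketch shows one way a bidirectional LSTM tagger can score candidate words for a position from its left and right context alone. This is a minimal sketch: the class name, hyperparameters, and the offset-by-one scheme for hiding the target word from the model are assumptions for illustration, not the authors' released implementation.

    # Minimal sketch of a bidirectional LSTM tagger that scores candidate
    # words for each position from its sentential context. Hyperparameters
    # and the target-hiding scheme are illustrative assumptions.
    import torch
    import torch.nn as nn

    class BiLSTMWordChoiceTagger(nn.Module):
        def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # Separate forward and backward LSTMs so each position can be
            # scored from context that excludes the target word itself.
            self.fwd = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.bwd = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(2 * hidden_dim, vocab_size)

        def forward(self, tokens):  # tokens: (batch, seq_len) word indices
            e = self.embed(tokens)
            h_fwd, _ = self.fwd(e)            # left-to-right context states
            h_bwd, _ = self.bwd(e.flip(1))    # right-to-left context states
            h_bwd = h_bwd.flip(1)
            # Score position t from the forward state at t-1 and the
            # backward state at t+1, so the word at t stays hidden.
            # (torch.roll wraps around; boundary positions would need
            # padding or masking in a full implementation.)
            left = torch.roll(h_fwd, shifts=1, dims=1)
            right = torch.roll(h_bwd, shifts=-1, dims=1)
            # Returns (batch, seq_len, vocab_size) candidate-word logits.
            return self.out(torch.cat([left, right], dim=-1))

    # Usage: rank candidate words for every position in a sentence batch.
    model = BiLSTMWordChoiceTagger(vocab_size=50000)
    tokens = torch.randint(0, 50000, (2, 12))  # 2 sentences, 12 tokens each
    logits = model(tokens)                     # (2, 12, 50000)

Consistent with the summary's claim of an unsupervised method, such a tagger could be trained with a cross-entropy loss against the actual word at each position over a high-quality corpus, requiring no manual annotation; at suggestion time, high-scoring words other than the one the writer used become replacement candidates.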