Combining Pretrained High-Resource Embeddings and Subword Representations for Low-Resource Languages

Bibliographic Details
Published in: arXiv.org
Main Authors: Reid, Machel; Marrese-Taylor, Edison; Matsuo, Yutaka
Format: Paper, Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 21.04.2020

More Information
Summary: The contrast between the need for large amounts of data for current Natural Language Processing (NLP) techniques, and the lack thereof, is accentuated in the case of African languages, most of which are considered low-resource. To help circumvent this issue, we explore techniques exploiting the qualities of morphologically rich languages (MRLs), while leveraging pretrained word vectors in well-resourced languages. In our exploration, we show that a meta-embedding approach combining both pretrained and morphologically-informed word embeddings performs best in the downstream task of Xhosa-English translation.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2003.04419
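
The record contains no code, but the summary's core idea, combining a pretrained high-resource embedding with a morphologically-informed subword embedding into a single meta-embedding, can be sketched concisely. The sketch below is an illustration only, not the authors' implementation: the FastText-style character n-gram averaging, the concatenation strategy, and all names (subword_embedding, meta_embedding, the toy vectors) are assumptions introduced here.

```python
import numpy as np

def subword_embedding(word, ngram_vectors, n=3):
    """Average character n-gram vectors for a word (FastText-style),
    one common way to build a morphologically-informed embedding."""
    padded = f"<{word}>"  # boundary markers so prefixes/suffixes are distinct n-grams
    grams = [padded[i:i + n] for i in range(len(padded) - n + 1)]
    vecs = [ngram_vectors[g] for g in grams if g in ngram_vectors]
    if not vecs:
        # No known n-grams: fall back to a zero vector of the right size.
        return np.zeros(next(iter(ngram_vectors.values())).shape)
    return np.mean(vecs, axis=0)

def meta_embedding(word, pretrained, ngram_vectors, dim=300):
    """Concatenate a pretrained (high-resource) vector with a subword
    vector; out-of-vocabulary words get zeros on the pretrained side,
    so the morphological half still carries signal."""
    pre = pretrained.get(word, np.zeros(dim))
    sub = subword_embedding(word, ngram_vectors)
    return np.concatenate([pre, sub])

# Toy Xhosa example with random 4-dimensional vectors, for shape-checking only.
pretrained = {"umntu": np.random.rand(4)}
ngrams = {"<um": np.random.rand(4), "umn": np.random.rand(4)}
print(meta_embedding("umntu", pretrained, ngrams, dim=4).shape)  # (8,)
```

Concatenation is only one way to form a meta-embedding; averaging or a learned projection over the two sources are equally plausible readings of the summary, which does not specify the combination method.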