Analysis of Multi-Source Language Training in Cross-Lingual Transfer


Bibliographic Details
Published in: arXiv.org
Main Authors: Lim, Seong Hoon; Yun, Taejun; Kim, Jinhyeon; Choi, Jihun; Kim, Taeuk
Format: Paper; Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 05.06.2024

Summary: The successful adaptation of multilingual language models (LMs) to a specific language-task pair critically depends on the availability of data tailored for that condition. While cross-lingual transfer (XLT) methods have contributed to addressing this data scarcity problem, there is still ongoing debate about the mechanisms behind their effectiveness. In this work, we focus on one of the promising assumptions about the inner workings of XLT: that it encourages multilingual LMs to place greater emphasis on language-agnostic or task-specific features. We test this hypothesis by examining how the patterns of XLT change with a varying number of source languages involved in the process. Our experimental findings show that the use of multiple source languages in XLT, a technique we term Multi-Source Language Training (MSLT), leads to increased mingling of embedding spaces for different languages, supporting the claim that XLT benefits from making use of language-independent information. On the other hand, we discover that using an arbitrary combination of source languages does not always guarantee better performance. We suggest simple heuristics for identifying effective language combinations for MSLT and empirically demonstrate their effectiveness.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2402.13562
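
The record describes MSLT only at a high level: fine-tune a multilingual LM on task data drawn from several source languages at once, then evaluate on a target language. As a rough illustration of that setup, here is a minimal sketch assuming XLM-RoBERTa as the backbone, XNLI as the task, and Hugging Face transformers/datasets as tooling; the model, task, and language combination are illustrative assumptions, not details taken from the paper.

```python
# Minimal MSLT sketch: fine-tune a multilingual LM on a mixture of
# several source languages' task data, then evaluate zero-shot on a
# target language. All concrete choices below are assumptions.
from datasets import load_dataset, concatenate_datasets
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

source_langs = ["en", "de", "ru"]  # hypothetical source-language combination
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def tokenize(batch):
    # Pad to a fixed length so the default collator can batch examples.
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, padding="max_length", max_length=128)

# Concatenate the per-language training splits into one multi-source set;
# the embedding-space "mingling" the paper measures emerges from training
# on this mixed-language data.
train = concatenate_datasets(
    [load_dataset("xnli", lang, split="train") for lang in source_langs]
).shuffle(seed=42).map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3)  # XNLI is a 3-way NLI task

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mslt-xnli", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=train,
)
trainer.train()

# Zero-shot transfer: evaluate on a target language unseen in training.
target = load_dataset("xnli", "sw", split="test").map(tokenize, batched=True)
print(trainer.evaluate(eval_dataset=target))
```

In this framing, the paper's heuristics for choosing effective source-language combinations would correspond to how `source_langs` is selected rather than to any change in the training loop itself.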