The Hidden Space of Transformer Language Adapters

Bibliographic Details
Main Authors: Alabi, Jesujoba O.; Mosbach, Marius; Eyal, Matan; Klakow, Dietrich; Geva, Mor
Format: Journal Article
Language: English
Published: 20.02.2024

More Information
Summary: We analyze the operation of transformer language adapters, which are small modules trained on top of a frozen language model to adapt its predictions to new target languages. We show that adapted predictions mostly evolve in the source language the model was trained on, while the target language becomes pronounced only in the very last layers of the model. Moreover, the adaptation process is gradual and distributed across layers, where it is possible to skip small groups of adapters without decreasing adaptation performance. Lastly, we show that adapters operate on top of the model's frozen representation space while largely preserving its structure, rather than on an 'isolated' subspace. Our findings provide a deeper view into the adaptation process of language models to new languages, showcasing the constraints imposed on it by the underlying model and introducing practical implications to enhance its efficiency.
DOI: 10.48550/arxiv.2402.13137
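
The summary above describes language adapters as small trainable modules placed on top of a frozen language model. As a rough illustration only, the sketch below shows a standard bottleneck adapter with a residual connection; the paper's exact adapter configuration is not given in this record, and the names (`BottleneckAdapter`, `hidden_size`, `bottleneck_size`) are hypothetical.

```python
# Minimal sketch of a bottleneck language adapter, assuming the common
# down-projection / up-projection design; not the paper's exact architecture.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small trainable module added on top of a frozen transformer sublayer."""
    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)  # project down
        self.up = nn.Linear(bottleneck_size, hidden_size)    # project back up
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection leaves the frozen model's representations
        # intact and lets the adapter add a small learned correction on top,
        # consistent with the summary's point that adapters operate on the
        # frozen representation space while largely preserving its structure.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Usage sketch (illustrative, not a specific library API): freeze the base
# model's parameters and train only the adapter modules.
# for p in model.parameters():
#     p.requires_grad = False
# adapters = nn.ModuleList(BottleneckAdapter(hidden_size=768) for _ in range(num_layers))
```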