Overcoming Barriers to Skill Injection in Language Modeling: Case Study in Arithmetic
Main Authors | , , |
Format | Journal Article |
Language | English |
Published | 03.11.2022 |
Summary: | Through their transfer learning abilities, highly parameterized large pre-trained language models have dominated the NLP landscape for a multitude of downstream language tasks. Though linguistically proficient, the inability of these models to incorporate the learning of non-linguistic entities (numerals and arithmetic reasoning) limits their usage for tasks that require numeric comprehension or strict mathematical reasoning. However, as we illustrate in this paper, building a general-purpose language model that also happens to be proficient in mathematical reasoning is not as straightforward as training it on a numeric dataset. In this work, we develop a novel framework that enables language models to be mathematically proficient while retaining their linguistic prowess. Specifically, we offer information-theoretic interventions to overcome the catastrophic forgetting of linguistic skills that occurs while injecting non-linguistic skills into language models. |
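The abstract names information-theoretic interventions without defining them here. Purely as an illustrative sketch, and not the paper's actual framework, the snippet below shows one standard intervention in this family: fine-tuning on a numeric task while penalizing the KL divergence between the fine-tuned model's predictive distribution and that of the frozen pre-trained model, so the new skill is injected without drifting far from the original linguistic behavior. The `TinyLM` class, the `kl_weight` value, and the toy batch are hypothetical stand-ins.

```python
import torch
import torch.nn.functional as F

class TinyLM(torch.nn.Module):
    """Hypothetical stand-in for a pre-trained causal language model."""
    def __init__(self, vocab=128, dim=64):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab, dim)
        self.proj = torch.nn.Linear(dim, vocab)

    def forward(self, tokens):                 # tokens: (batch, seq)
        return self.proj(self.embed(tokens))   # logits: (batch, seq, vocab)

base = TinyLM()                               # frozen pre-trained reference
student = TinyLM()
student.load_state_dict(base.state_dict())   # fine-tune a copy of the same weights
for p in base.parameters():
    p.requires_grad_(False)

opt = torch.optim.AdamW(student.parameters(), lr=1e-3)
kl_weight = 0.5   # assumed trade-off between new skill and retained skill

tokens = torch.randint(0, 128, (8, 16))       # toy "arithmetic" batch
targets = torch.roll(tokens, -1, dims=1)      # next-token prediction targets

for step in range(100):
    logits = student(tokens)
    # Task loss: learn the new (numeric) skill.
    task_loss = F.cross_entropy(logits.view(-1, 128), targets.view(-1))
    # Retention loss: stay close to the frozen model's distribution,
    # discouraging catastrophic forgetting of the original behavior.
    with torch.no_grad():
        ref_logp = F.log_softmax(base(tokens), dim=-1)
    kl = F.kl_div(F.log_softmax(logits, dim=-1), ref_logp,
                  log_target=True, reduction="batchmean")
    loss = task_loss + kl_weight * kl
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Raising `kl_weight` preserves more of the original model's behavior at the cost of slower acquisition of the injected skill; the paper's own interventions should be consulted for how this trade-off is actually handled.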
DOI: | 10.48550/arxiv.2211.02098 |