In-Context Retrieval-Augmented Language Models
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 31.01.2023 |
Subjects | |
Online Access | Get full text |
Summary: | Retrieval-Augmented Language Modeling (RALM) methods, which condition a language model (LM) on relevant documents from a grounding corpus during generation, were shown to significantly improve language modeling performance. In addition, they can mitigate the problem of factually inaccurate text generation and provide a natural source attribution mechanism. Existing RALM approaches focus on modifying the LM architecture in order to facilitate the incorporation of external information, significantly complicating deployment. This paper considers a simple alternative, which we dub In-Context RALM: leaving the LM architecture unchanged and prepending grounding documents to the input, without any further training of the LM. We show that In-Context RALM that builds on off-the-shelf general-purpose retrievers provides surprisingly large LM gains across model sizes and diverse corpora. We also demonstrate that the document retrieval and ranking mechanism can be specialized to the RALM setting to further boost performance. We conclude that In-Context RALM has considerable potential to increase the prevalence of LM grounding, particularly in settings where a pretrained LM must be used without modification or even via API access. |
DOI: | 10.48550/arxiv.2302.00083 |
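The summary describes the core mechanism of In-Context RALM: retrieve a relevant document and simply prepend it to the LM's input, with the model itself left frozen and unmodified. The following is a minimal sketch of that idea, not the paper's actual pipeline: the model name ("gpt2"), the two-document toy corpus, and the word-overlap `retrieve` helper are illustrative assumptions standing in for the off-the-shelf retrievers and the LMs evaluated in the paper.

```python
# Minimal sketch of In-Context RALM: retrieve, prepend, generate with a frozen LM.
# Assumptions: "gpt2" is an arbitrary stand-in model; the corpus and the
# word-overlap retriever below are toy substitutes for a real retriever.
from transformers import AutoModelForCausalLM, AutoTokenizer

corpus = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "The Great Wall of China stretches thousands of kilometres.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document with the largest word overlap with the query
    (a toy stand-in for an off-the-shelf retriever)."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The Eiffel Tower was completed in the year"

# Core idea: prepend the retrieved grounding document to the input and run
# the unchanged, untrained-further LM on the concatenation.
grounded_input = retrieve(prompt, corpus) + "\n" + prompt

inputs = tokenizer(grounded_input, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the grounding text enters only through the prompt, the same sketch works with any causal LM, including one reachable only via an API, which is the deployment setting the summary highlights.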