Bridging the Preference Gap between Retrievers and LLMs

Bibliographic Details
Main Authors: Ke, Zixuan; Kong, Weize; Li, Cheng; Zhang, Mingyang; Mei, Qiaozhu; Bendersky, Michael
Format: Journal Article
Language: English
Published: 12.01.2024

Summary: Large Language Models (LLMs) have demonstrated superior results across a wide range of tasks, and Retrieval-augmented Generation (RAG) is an effective way to enhance performance by locating relevant information and placing it into the context window of the LLM. However, the relationship between retrievers and LLMs in a RAG pipeline is still under-investigated. Most existing work treats the retriever and the LLM as independent components, leaving a gap between retrieving human-"friendly" information and assembling an LLM-"friendly" context. In this work, we examine a novel bridge mechanism. We validate the ranking and selection assumptions of retrievers in the context of RAG and propose a framework that chains together supervised and reinforcement learning to train a bridge model that optimizes the connection between the retriever and the LLM. Empirical results demonstrate the effectiveness of our method in both question-answering and personalized generation tasks.
DOI: 10.48550/arxiv.2401.06954
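
The record contains only the abstract, so the following is a minimal Python sketch of the bridge idea it describes: a small model that sits between the retriever and the LLM, re-scoring and selecting retrieved passages to assemble an LLM-"friendly" context. All names here (Passage, BridgeModel, assemble_context) and the toy overlap scorer are hypothetical illustrations; the paper's actual architecture and its supervised-plus-reinforcement-learning training procedure are not specified in this record.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Passage:
    text: str
    retriever_score: float  # relevance score from the upstream retriever


class BridgeModel:
    """Hypothetical bridge between a retriever and an LLM.

    In the abstract's framing, such a model would be trained first with
    supervised learning and then with reinforcement learning, where the
    reward comes from downstream LLM task performance rather than from
    human relevance judgments.
    """

    def __init__(self, scorer: Callable[[str, str], float]):
        # `scorer` stands in for a learned query/passage scoring network.
        self.scorer = scorer

    def assemble_context(self, query: str, passages: List[Passage],
                         budget: int = 3) -> str:
        # Rank by the bridge's own LLM-oriented scores rather than the
        # retriever's, then concatenate the top passages into a prompt.
        ranked = sorted(passages,
                        key=lambda p: self.scorer(query, p.text),
                        reverse=True)
        return "\n\n".join(p.text for p in ranked[:budget])


if __name__ == "__main__":
    # Toy stand-in scorer: token overlap between query and passage.
    def overlap(query: str, text: str) -> float:
        q = set(query.lower().split())
        return len(q & set(text.lower().split())) / max(len(q), 1)

    bridge = BridgeModel(scorer=overlap)
    candidates = [
        Passage("RAG augments LLM prompts with retrieved text.", 0.9),
        Passage("Retrievers rank documents for human readers.", 0.8),
        Passage("Unrelated trivia about weather patterns.", 0.7),
    ]
    print(bridge.assemble_context("How does RAG help LLMs?", candidates))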