RaLLe: A Framework for Developing and Evaluating Retrieval-Augmented Large Language Models

Bibliographic Details
Main Authors: Hoshi, Yasuto; Miyashita, Daisuke; Ng, Youyang; Tatsuno, Kento; Morioka, Yasuhiro; Torii, Osamu; Deguchi, Jun
Format: Journal Article
Language: English
Published: 21.08.2023
Summary: Retrieval-augmented large language models (R-LLMs) combine pre-trained large language models (LLMs) with information retrieval systems to improve the accuracy of factual question-answering. However, current libraries for building R-LLMs provide high-level abstractions without sufficient transparency for evaluating and optimizing prompts within specific inference processes such as retrieval and generation. To address this gap, we present RaLLe, an open-source framework designed to facilitate the development, evaluation, and optimization of R-LLMs for knowledge-intensive tasks. With RaLLe, developers can build and evaluate R-LLMs, improve hand-crafted prompts, assess individual inference processes, and measure overall system performance quantitatively. By leveraging these features, developers can enhance the performance and accuracy of their R-LLMs on knowledge-intensive generation tasks. We open-source our code at https://github.com/yhoshi3/RaLLe.
DOI: 10.48550/arxiv.2308.10633
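
For readers unfamiliar with the retrieve-then-generate chain the summary refers to, the following is a minimal, self-contained Python sketch of that inference pipeline. All names in it (corpus, retrieve, build_prompt, generate) are illustrative assumptions for exposition, not RaLLe's actual API; see the repository linked above for the real interfaces.

```python
# Minimal sketch of a retrieval-augmented generation chain:
# retrieve passages -> build a prompt -> generate an answer.
# Hypothetical names throughout; this is not RaLLe's API.

corpus = [
    "RaLLe is an open-source framework for retrieval-augmented LLMs.",
    "Dense retrievers embed queries and passages into a shared vector space.",
    "Knowledge-intensive tasks include open-domain question answering.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy lexical retriever: rank passages by token overlap with the query."""
    q_tokens = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_tokens & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Hand-crafted prompt template; the paper's point is making this step
    inspectable and tunable per inference stage."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; replace with any text-generation backend."""
    return "(model output for: " + prompt.splitlines()[-2] + ")"

if __name__ == "__main__":
    question = "What is RaLLe?"
    answer = generate(build_prompt(question, retrieve(question)))
    print(answer)
```

Because each stage is a plain function, a prompt or retriever can be evaluated in isolation, which mirrors the per-stage transparency the abstract argues current high-level libraries lack.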