Seven Failure Points When Engineering a Retrieval Augmented Generation System

Bibliographic Details
Published in: 2024 IEEE/ACM 3rd International Conference on AI Engineering – Software Engineering for AI (CAIN), pp. 194-199
Main Authors: Barnett, Scott; Kurniawan, Stefanus; Thudumu, Srikanth; Brannelly, Zach; Abdelrazek, Mohamed
Format: Conference Proceeding
Language: English
Published: ACM, 14.04.2024
DOI: 10.1145/3644815.3644945

Summary: Software engineers are increasingly adding semantic search capabilities to applications using a strategy known as Retrieval Augmented Generation (RAG). A RAG system involves finding documents that semantically match a query and then passing those documents to a large language model (LLM) such as ChatGPT, which extracts the right answer from them. RAG systems aim to: a) reduce the problem of hallucinated responses from LLMs, b) link sources/references to generated responses, and c) remove the need for annotating documents with meta-data. However, RAG systems suffer from limitations inherent to information retrieval systems and from reliance on LLMs. In this paper, we present an experience report on the failure points of RAG systems from three case studies in separate domains: research, education, and biomedical. We share the lessons learned and present 7 failure points to consider when designing a RAG system. The two key takeaways arising from our work are: 1) validation of a RAG system is only feasible during operation, and 2) the robustness of a RAG system evolves rather than being designed in at the start. We conclude with a list of potential research directions on RAG systems for the software engineering community.

CCS Concepts: • Software and its engineering → Empirical software validation.
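The retrieve-then-generate flow described in the summary can be illustrated with a minimal sketch. The bag-of-words similarity and the generate_answer stub below are toy stand-ins, not the paper's implementation; a real RAG system would use a neural embedder, a vector index, and an LLM API.

```python
# Minimal sketch of the retrieve-then-generate flow described above.
# The lexical "embedding" and the generate_answer stub are toy stand-ins;
# a real RAG system would use a neural embedder, a vector index, and an
# LLM API such as ChatGPT.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy embedding: lowercase bag-of-words term counts.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0


def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


def generate_answer(query: str, context: list[str]) -> str:
    # Placeholder for the LLM call: the retrieved passages are inserted
    # into the prompt so the model can ground its answer in them.
    prompt = (
        "Answer using only this context:\n"
        + "\n".join(context)
        + f"\nQuestion: {query}"
    )
    return prompt  # a real system would return the LLM's completion


docs = [
    "RAG systems retrieve documents that match a query.",
    "Retrieved documents are passed to an LLM to extract the answer.",
    "Validation of a RAG system is only feasible during operation.",
]
context = retrieve("How does a RAG system answer a query?", docs)
print(generate_answer("How does a RAG system answer a query?", context))
```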