(Why) Is My Prompt Getting Worse? Rethinking Regression Testing for Evolving LLM APIs

Bibliographic Details
Published in: 2024 IEEE/ACM 3rd International Conference on AI Engineering – Software Engineering for AI (CAIN), pp. 166-171
Main Authors: Ma, Wanqin; Yang, Chenyang; Kästner, Christian
Format: Conference Proceeding
Language: English
Published: ACM, 14.04.2024
DOI: 10.1145/3644815.3644950

More Information
Summary: Large Language Models (LLMs) are increasingly integrated into software applications. Downstream application developers often access LLMs through APIs provided as a service. However, LLM APIs are often updated silently and scheduled to be deprecated, forcing users to continuously adapt to evolving models. This can cause performance regression and affect prompt design choices, as evidenced by our case study on toxicity detection. Based on our case study, we emphasize the need for and re-examine the concept of regression testing for evolving LLM APIs. We argue that regression testing LLMs requires fundamental changes to traditional testing approaches, due to different correctness notions, prompting brittleness, and non-determinism in LLM APIs.
CCS Concepts: • Software and its engineering → Software testing and debugging.
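
To make the summary's argument concrete, below is a minimal, illustrative regression-test sketch (not taken from the paper): it re-runs a fixed toxicity-detection prompt against a pinned and a newer model version on a small labeled set, samples each input several times to smooth over non-determinism, and flags the update if accuracy drops beyond a tolerance. The call_model client, the model names, and the dataset are hypothetical placeholders, not an API from the paper.

# Illustrative regression-test sketch for an evolving LLM API.
# `call_model(model, prompt) -> str` is a hypothetical, user-supplied client.

from collections import Counter
from typing import Callable, List, Tuple

PROMPT = "Is the following comment toxic? Answer yes or no.\n\nComment: {text}"

def majority_label(call_model: Callable[[str, str], str],
                   model: str, text: str, samples: int = 5) -> str:
    """Query the model several times and take the majority answer,
    to reduce the effect of non-deterministic outputs."""
    answers = []
    for _ in range(samples):
        reply = call_model(model, PROMPT.format(text=text)).strip().lower()
        answers.append("yes" if reply.startswith("yes") else "no")
    return Counter(answers).most_common(1)[0][0]

def accuracy(call_model: Callable[[str, str], str], model: str,
             dataset: List[Tuple[str, str]], samples: int = 5) -> float:
    """Fraction of labeled examples the model classifies correctly."""
    correct = sum(majority_label(call_model, model, text, samples) == label
                  for text, label in dataset)
    return correct / len(dataset)

def check_regression(call_model: Callable[[str, str], str],
                     old_model: str, new_model: str,
                     dataset: List[Tuple[str, str]],
                     tolerance: float = 0.02) -> bool:
    """Return True if the new model version regresses beyond the tolerance."""
    old_acc = accuracy(call_model, old_model, dataset)
    new_acc = accuracy(call_model, new_model, dataset)
    print(f"{old_model}: {old_acc:.3f}  {new_model}: {new_acc:.3f}")
    return new_acc < old_acc - tolerance

if __name__ == "__main__":
    # Deterministic stub in place of a real API client, so the sketch runs as-is.
    def fake_client(model: str, prompt: str) -> str:
        return "yes" if "idiot" in prompt.lower() else "no"

    data = [("You are an idiot.", "yes"), ("Have a nice day!", "no")]
    if check_regression(fake_client, "model-2023-06", "model-2024-03", data):
        print("Possible performance regression detected.")

In practice, the paper's observations suggest such a harness would also need to track prompt variants (to surface prompting brittleness) and use task-appropriate correctness notions rather than exact-match accuracy alone.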