Assessing the Answerability of Queries in Retrieval-Augmented Code Generation

Bibliographic Details
Published in: arXiv.org
Main Authors: Kim, Geonmin; Kim, Jaeyeon; Park, Hancheol; Shin, Wooksu; Kim, Tae-Ho
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 08.11.2024

Summary: Thanks to the unprecedented language understanding and generation capabilities of large language models (LLMs), Retrieval-augmented Code Generation (RaCG) has recently been widely adopted among software developers. While this has increased productivity, incorrect code is still frequently provided. In particular, there are cases where plausible yet incorrect code is generated for user queries that cannot be answered with the given queries and API descriptions. This study proposes a task for evaluating answerability, which assesses whether a valid answer can be generated based on the user's query and the retrieved APIs in RaCG. Additionally, we build a benchmark dataset called Retrieval-augmented Code Generability Evaluation (RaCGEval) to evaluate the performance of models performing this task. Experimental results show that this task remains very challenging, with baseline models exhibiting a low performance of 46.7%. Furthermore, this study discusses methods that could significantly improve performance.
ISSN: 2331-8422
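
The abstract frames answerability assessment as judging, from a user's query and the retrieved API descriptions, whether a valid code answer can be generated at all. The sketch below is one plausible way to set up such a check; it is not the paper's implementation, and every name in it (ApiDoc, build_prompt, judge_answerability, the prompt wording, and the stand-in LLM callable) is a hypothetical assumption.

# Minimal sketch of an answerability check for retrieval-augmented code
# generation. All names and the prompt wording are illustrative assumptions,
# not the method described in the record above.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ApiDoc:
    name: str
    description: str

def build_prompt(query: str, apis: List[ApiDoc]) -> str:
    """Compose a judgment prompt from the user query and retrieved API docs."""
    api_block = "\n".join(f"- {a.name}: {a.description}" for a in apis)
    return (
        "Given the user query and the retrieved API descriptions below, "
        "answer 'answerable' if a valid code solution can be written using "
        "only these APIs, otherwise answer 'unanswerable'.\n\n"
        f"Query: {query}\n\nRetrieved APIs:\n{api_block}\n\nJudgment:"
    )

def judge_answerability(query: str, apis: List[ApiDoc],
                        llm: Callable[[str], str]) -> bool:
    """Return True if the (hypothetical) judge model deems the query answerable."""
    verdict = llm(build_prompt(query, apis)).strip().lower()
    return verdict.startswith("answerable")

if __name__ == "__main__":
    # Stand-in for a real model call; a deployment would query an actual LLM.
    fake_llm = lambda prompt: "unanswerable"
    apis = [ApiDoc("image.resize", "Resize an image to the given width and height.")]
    print(judge_answerability("Rotate an image by 90 degrees.", apis, fake_llm))

In practice the stand-in callable would be replaced by a real model call, and the binary verdict could be broadened if the task defines finer-grained labels; both choices are assumptions here, not details taken from the record.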