Assessing the Answerability of Queries in Retrieval-Augmented Code Generation
Format | Journal Article
Language | English
Published | 08.11.2024
Summary: Thanks to the unprecedented language understanding and generation capabilities of large language models (LLMs), Retrieval-augmented Code Generation (RaCG) has recently been widely adopted by software developers. While this has increased productivity, incorrect code is still frequently provided. In particular, there are cases where plausible yet incorrect code is generated for user queries that cannot be answered with the given queries and retrieved API descriptions. This study proposes a task for evaluating answerability, which assesses whether a valid answer can be generated from a user's query and the retrieved APIs in RaCG. Additionally, we build a benchmark dataset called Retrieval-augmented Code Generability Evaluation (RaCGEval) to evaluate the performance of models on this task. Experimental results show that the task remains very challenging, with baseline models achieving a low performance of 46.7%. Furthermore, this study discusses methods that could significantly improve performance.
DOI | 10.48550/arxiv.2411.05547
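The summary above frames answerability assessment as deciding, from a user's query and the retrieved API descriptions, whether a valid code answer can be generated at all. The paper's actual method and label set are not given in this record, so the following is only a minimal sketch of such a check, assuming a hypothetical LLM-based judge; `build_answerability_prompt`, `is_answerable`, and the injected `llm` callable are illustrative names, not part of RaCGEval.

```python
from typing import Callable, Sequence


def build_answerability_prompt(query: str, api_descriptions: Sequence[str]) -> str:
    """Format the query and retrieved API descriptions into one judging prompt.

    The prompt wording is illustrative, not taken from the paper.
    """
    apis = "\n".join(f"- {d}" for d in api_descriptions)
    return (
        "You are given a user query and the API descriptions retrieved for it.\n"
        "Decide whether a valid code answer can be written using ONLY these APIs.\n\n"
        f"Query:\n{query}\n\nRetrieved APIs:\n{apis}\n\n"
        "Answer with exactly one word: ANSWERABLE or UNANSWERABLE."
    )


def is_answerable(
    query: str,
    api_descriptions: Sequence[str],
    llm: Callable[[str], str],
) -> bool:
    """Ask the injected LLM callable for a verdict and parse it into a boolean."""
    verdict = llm(build_answerability_prompt(query, api_descriptions)).strip().upper()
    return verdict.startswith("ANSWERABLE")


if __name__ == "__main__":
    # Toy usage with a stub "LLM" that always refuses, just to show the call shape;
    # a real setup would pass a function that calls an actual model.
    stub_llm = lambda prompt: "UNANSWERABLE"
    print(
        is_answerable(
            "Resize an image to 256x256",
            ["requests.get(url): fetch a URL over HTTP"],
            stub_llm,
        )
    )
```

Passing the model as a plain callable keeps the judging step independent of any particular LLM API, which is the only design commitment this sketch makes.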