Model Cascading for Code: Reducing Inference Costs with Model Cascading for LLM Based Code Generation
Main Authors: , , , ,
Format: Journal Article
Language: English
Published: 24.05.2024
Summary: The rapid development of large language models (LLMs) has led to significant advancements in code completion tasks. While larger models have higher accuracy, they also cost much more to run. Meanwhile, model cascading has proven effective at conserving computational resources while enhancing accuracy in LLMs on natural language generation tasks: it generates output with the smallest model in a set, and queries the larger models only when the output fails to meet predefined quality criteria. However, this strategy has not been used in code completion tasks, primarily because assessing the quality of code completions differs substantially from assessing natural language, as the former relies heavily on functional correctness. To address this, we propose letting each model generate and execute a set of test cases for its solutions, and using the test results as the cascading threshold. We show that our model cascading strategy reduces computational costs while increasing accuracy compared to generating the output with a single model.
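A minimal sketch of this test-based cascading loop, assuming each black-box model is wrapped as a callable that returns candidate solutions and self-generated tests; the helper names and the pass-rate threshold are illustrative, not from the paper:

```python
from typing import Callable

# Each "model" is abstracted as a callable returning candidate solutions
# and self-generated assert-style test cases for a prompt; in practice
# these would be API calls to increasingly large LLMs.
Model = Callable[[str], tuple[list[str], list[str]]]

def pass_rate(solution: str, tests: list[str]) -> float:
    """Fraction of the generated tests that the solution passes."""
    if not tests:
        return 0.0
    passed = 0
    for test in tests:
        scope: dict = {}
        try:
            exec(solution, scope)  # define the candidate function(s)
            exec(test, scope)      # run one test case against them
            passed += 1
        except Exception:
            pass                   # any error or assertion is a failure
    return passed / len(tests)

def cascade(models: list[Model], prompt: str, threshold: float = 1.0) -> str:
    """Try models from cheapest to most expensive; return the first
    candidate whose self-test pass rate meets the threshold."""
    best_solution = ""
    for model in models:                          # ordered small -> large
        solutions, tests = model(prompt)
        if not solutions:
            continue
        best_rate, best_solution = max(
            (pass_rate(s, tests), s) for s in solutions
        )
        if best_rate >= threshold:                # quality criterion met:
            return best_solution                  # skip the larger models
    return best_solution                          # largest model's answer
```

Because the threshold is computed purely from generated outputs and sandboxed execution, the loop needs no access to model internals, which is what makes the approach applicable to black-box models.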
We also introduce a heuristic to determine the optimal combination of the number of solutions, test cases, and test lines each model should generate, based on the budget. Compared to speculative decoding, our method works on black-box models, achieves the same level of cost-accuracy trade-off, and provides many more choices based on the server's budget. Ours is the first work to optimize the cost-accuracy trade-off for LLM code generation with model cascading.
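The record does not spell out the heuristic itself; a minimal sketch of the general idea, assuming an illustrative per-line cost model and a validation-set accuracy estimate supplied by the caller:

```python
from itertools import product
from typing import Callable, Optional

def generation_cost(k_solutions: int, k_tests: int, test_lines: int,
                    cost_per_line: float = 1.0,
                    solution_lines: int = 10) -> float:
    """Rough cost model: generated tokens scale with the number of
    solutions plus the number and length of test cases. The constants
    are illustrative assumptions, not values from the paper."""
    return cost_per_line * (k_solutions * solution_lines
                            + k_tests * test_lines)

def best_configuration(
    budget: float,
    accuracy_estimate: Callable[[int, int, int], float],
) -> Optional[tuple[int, int, int]]:
    """Enumerate (solutions, tests, lines-per-test) combinations and keep
    the highest estimated accuracy that fits within the budget.
    `accuracy_estimate` would be measured on a validation set."""
    best_acc, best_cfg = -1.0, None
    for k_s, k_t, lines in product(range(1, 11), range(1, 11), range(1, 6)):
        if generation_cost(k_s, k_t, lines) <= budget:
            acc = accuracy_estimate(k_s, k_t, lines)
            if acc > best_acc:
                best_acc, best_cfg = acc, (k_s, k_t, lines)
    return best_cfg  # None if nothing fits the budget
```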
DOI: 10.48550/arxiv.2405.15842