Bridging Code Semantic and LLMs: Semantic Chain-of-Thought Prompting for Code Generation
Main Authors | , , , , , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 16.10.2023 |
Summary: | Large language models (LLMs) have showcased remarkable prowess in code
generation. However, automated code generation remains challenging since it
requires a high-level semantic mapping between natural language requirements
and code. Most existing LLM-based approaches for code generation rely on
decoder-only causal language models that treat code merely as plain text
tokens, i.e., feeding the requirements as a prompt input and outputting code as
a flat sequence of tokens, potentially missing the rich semantic features
inherent in source code. To bridge this gap, this paper proposes the "Semantic
Chain-of-Thought" approach, named SeCoT, to introduce the semantic information
of code. Our motivation is that the semantic information of source code (e.g.,
data flow and control flow) describes more precise program execution behavior,
intention, and function. By guiding the LLM to consider and integrate semantic
information, we can achieve a more granular understanding and representation of
code, enhancing code generation accuracy. Meanwhile, while traditional
techniques leveraging such semantic information require complex static or
dynamic code analysis to obtain features such as data flow and control flow,
SeCoT demonstrates that this process can be fully automated via the intrinsic
capabilities of LLMs (i.e., in-context learning), while remaining generalizable
and applicable to challenging domains. While SeCoT can be applied with different
LLMs, this paper focuses on powerful GPT-style models: ChatGPT (a closed-source
model) and WizardCoder (an open-source model). The experimental study on three
popular code generation benchmarks (i.e., HumanEval, HumanEval-ET, and MBPP)
shows that SeCoT achieves state-of-the-art performance, greatly improving the
potential of large models for code generation. |
DOI: | 10.48550/arxiv.2310.10698 |
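The two-step idea summarized in the abstract — first elicit code semantics (data flow, control flow) from the LLM, then feed that semantic chain-of-thought back in to generate the final code — can be sketched as a minimal prompting pipeline. The prompt wording, function names, and the `echo_llm` stub below are illustrative assumptions, not the paper's exact prompts or models:

```python
# Minimal sketch of a SeCoT-style two-step prompting pipeline.
# All prompt text here is a hypothetical approximation; the paper's
# actual prompts and backends (ChatGPT, WizardCoder) differ.

def semantic_analysis_prompt(requirement: str) -> str:
    """Step 1: ask the LLM to derive code semantics (data flow and
    control flow) from the natural-language requirement alone."""
    return (
        "Before writing any code, analyze the following requirement.\n"
        "List the key variables and their data flow, and outline the "
        "control flow (branches, loops) the solution needs.\n\n"
        f"Requirement: {requirement}\n"
    )

def code_generation_prompt(requirement: str, semantics: str) -> str:
    """Step 2: combine the requirement with the derived semantic
    chain-of-thought and ask the LLM for the final program."""
    return (
        f"Requirement: {requirement}\n\n"
        f"Semantic analysis (data flow / control flow):\n{semantics}\n\n"
        "Using the analysis above, write the Python function.\n"
    )

def secot_generate(requirement: str, llm) -> str:
    """Run both prompting steps with any chat-style LLM callable
    (a function mapping a prompt string to a response string)."""
    semantics = llm(semantic_analysis_prompt(requirement))
    return llm(code_generation_prompt(requirement, semantics))

# Stub LLM so the pipeline can be exercised without any API access.
def echo_llm(prompt: str) -> str:
    return f"[model response to {len(prompt)} chars of prompt]"

print(secot_generate("Return the n-th Fibonacci number.", echo_llm))
```

The key point the sketch captures is that no static or dynamic analyzer appears anywhere: the semantic features are produced by the model itself via in-context prompting, exactly the automation the abstract claims over traditional analysis-based techniques.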