Semantic similarity loss for neural source code summarization
Published in: Journal of Software: Evolution and Process, Vol. 36, No. 11
Main Authors:
Format: Journal Article
Language: English
Published: Chichester: Wiley Subscription Services, Inc., 01.11.2024
Summary: This paper presents a procedure for and evaluation of using a semantic similarity metric as a loss function for neural source code summarization. Code summarization is the task of writing natural language descriptions of source code. Neural code summarization refers to automated techniques for generating these descriptions using neural networks. Almost all current approaches involve neural networks, either as standalone models or as part of pretrained large language models such as GPT, Codex, and LLaMA. Yet almost all also use a categorical cross‐entropy (CCE) loss function for network optimization. Two problems with CCE are that (1) it computes loss over each word prediction one‐at‐a‐time, rather than evaluating a whole sentence, and (2) it requires a perfect prediction, leaving no room for partial credit for synonyms. In this paper, we extend our previous work on semantic similarity metrics to show a procedure for using semantic similarity as a loss function to alleviate these problems, and we evaluate this procedure in several settings in both metrics‐driven and human studies. In essence, we propose to use a semantic similarity metric to calculate loss over the whole output sentence prediction per training batch, rather than just loss for each word. We also propose to combine our loss with CCE for each word, which streamlines the training process compared to baselines. We evaluate our approach over several baselines and report improvement in the vast majority of conditions.
We proposed a procedure for using semantic similarity as a loss function. We evaluated this loss function with both purpose‐built models and large language models (LLMs). Results from both a human study and automatic metrics show that models trained with this loss function outperform models trained with categorical cross‐entropy (CCE).
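A minimal sketch of the core idea described in the summary: per-word cross-entropy is reweighted by a whole-sentence semantic similarity score computed per training batch, so a near-synonym prediction is penalized less than under plain CCE. The SentenceTransformer encoder, the Hugging Face-style tokenizer, and the weighting scheme below are stand-in assumptions for illustration; the paper's actual similarity metric and combination method may differ.

```python
import torch
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

# Stand-in sentence encoder (an assumption; the paper's metric may differ).
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def similarity_weighted_cce(logits, target_ids, tokenizer, lam=1.0):
    """Sketch: per-word CCE scaled by whole-sentence semantic dissimilarity.

    logits: (batch, seq_len, vocab) decoder outputs
    target_ids: (batch, seq_len) reference summary token ids
    tokenizer: assumed Hugging Face-style tokenizer with batch_decode
    Padding handling is omitted for brevity.
    """
    batch, seq_len, vocab = logits.shape

    # Standard categorical cross-entropy, kept per sample.
    cce = F.cross_entropy(
        logits.reshape(-1, vocab), target_ids.reshape(-1), reduction="none"
    ).view(batch, seq_len).mean(dim=1)

    # Decode greedy predictions and references into whole sentences.
    pred_sents = tokenizer.batch_decode(logits.argmax(-1), skip_special_tokens=True)
    ref_sents = tokenizer.batch_decode(target_ids, skip_special_tokens=True)

    # Sentence-level similarity enters only as a fixed per-sample weight,
    # so it does not need to be differentiable.
    with torch.no_grad():
        pred_emb = encoder.encode(pred_sents, convert_to_tensor=True)
        ref_emb = encoder.encode(ref_sents, convert_to_tensor=True)
        sim = F.cosine_similarity(pred_emb, ref_emb)  # (batch,), in [-1, 1]

    # Low whole-sentence similarity scales the CCE term up; an exact or
    # near-synonym prediction leaves it roughly unchanged.
    weights = 1.0 + lam * (1.0 - sim.clamp(min=0.0))
    return (weights.to(cce.device) * cce).mean()
```

Because the similarity acts only as a per-sample weight on CCE, training remains a single standard backward pass, consistent with the summary's note that combining the semantic loss with per-word CCE streamlines training relative to baselines.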
Bibliography: Present address: Holy Cross Dr, Notre Dame, IN 46556, USA
ISSN: 2047-7473; 2047-7481
DOI: 10.1002/smr.2706