Exploring the Effect of Multiple Natural Languages on Code Suggestion Using GitHub Copilot

Bibliographic Details
Main Authors: Koyanagi, Kei; Wang, Dong; Noguchi, Kotaro; Kondo, Masanari; Serebrenik, Alexander; Kamei, Yasutaka; Ubayashi, Naoyasu
Format: Journal Article
Language: English
Published: 02.02.2024

Summary: GitHub Copilot is an AI-enabled tool that automates program synthesis. It has gained significant attention since its launch in 2021. Recent studies have extensively examined Copilot's capabilities in various programming tasks, as well as its security issues. However, little is known about the effect of different natural languages on code suggestion. Natural language is considered a source of social bias in the field of NLP, and this bias could impact the diversity of software engineering. To address this gap, we conducted an empirical study to investigate the effect of three popular natural languages (English, Japanese, and Chinese) on Copilot. We used 756 questions of varying difficulty levels from AtCoder contests for evaluation purposes. The results highlight that Copilot's capability varies across natural languages, with Chinese achieving the worst performance. Furthermore, regardless of the natural language used, performance decreases significantly as the difficulty of the questions increases. Our work represents an initial step toward understanding the significance of natural languages in Copilot's capability and introduces promising opportunities for future work.
DOI: 10.48550/arxiv.2402.01438