A systematic literature review on the impact of AI models on the security of code generation

Bibliographic Details
Published in: Frontiers in Big Data, Vol. 7, p. 1386720
Main Authors: Negri-Ribalta, Claudia; Geraud-Stewart, Rémi; Sergeeva, Anastasia; Lenzini, Gabriele
Format: Journal Article
Language: English
Published: Frontiers Media S.A., Switzerland, 13 May 2024
Summary: Artificial Intelligence (AI) is increasingly used to assist in developing computer programs. While it can boost software development and improve coding proficiency, this practice offers no guarantee of security. On the contrary, recent research shows that some AI models produce software with vulnerabilities. This situation leads to the question: how serious and widespread are the security flaws in code generated using AI models? Through a systematic literature review, this work surveys the state of the art on how AI models impact software security. It systematizes the knowledge about the risks of using AI to code security-critical software. It reviews which well-known security weaknesses (e.g., from the MITRE CWE Top 25 Most Dangerous Software Weaknesses) commonly appear in AI-generated code. It also reviews works that discuss how vulnerabilities in AI-generated code can be exploited to compromise security, and it lists attempts to improve the security of such AI-generated code. Overall, this work provides a comprehensive and systematic overview of the impact of AI on secure coding, a topic that has sparked interest and concern within the software security engineering community. It highlights the importance of establishing security measures and processes, such as code verification, and of customizing such practices for AI-aided code production.
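To make the class of weakness concrete, the following minimal Python sketch (illustrative only, not drawn from the reviewed studies) shows CWE-89 (SQL injection), a MITRE CWE Top 25 entry that research on AI code assistants frequently reports in generated code, alongside the parameterized-query revision that a code-verification step would require.

```python
import sqlite3

# Illustrative sketch (not from the paper): CWE-89 (SQL injection) is one of
# the MITRE CWE Top 25 weaknesses reported in studies of AI-generated code.

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: building SQL by string interpolation lets crafted
    # input alter the query structure (CWE-89).
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer revision: a parameterized query keeps user input out of the
    # SQL text entirely.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    # This payload makes the vulnerable function return every row,
    # while the parameterized version returns none.
    payload = "' OR '1'='1"
    print(find_user_vulnerable(conn, payload))  # [(1, 'alice')]
    print(find_user_safe(conn, payload))        # []
```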
Edited by: Nikolaos Pitropakis, Edinburgh Napier University, United Kingdom
Reviewed by: Christos Chrysoulas, Edinburgh Napier University, United Kingdom; Livinus Obiora Nweke, NTNU, Norway; Dimitrios Kasimatis, Edinburgh Napier University, United Kingdom
ISSN: 2624-909X
DOI: 10.3389/fdata.2024.1386720