Welcome Your New AI Teammate: On Safety Analysis by Leashing Large Language Models

Bibliographic Details
Published in: 2024 IEEE/ACM 3rd International Conference on AI Engineering – Software Engineering for AI (CAIN), pp. 172-177
Main Authors: Nouri, Ali; Cabrero-Daniel, Beatriz; Torner, Fredrik; Sivencrona, Hakan; Berger, Christian
Format: Conference Proceeding
Language: English
Published: ACM, 14.04.2024
DOI: 10.1145/3644815.3644953

More Information
Summary: DevOps is a necessity in many industries, including the development of Autonomous Vehicles. In those settings, there are iterative activities that reduce the speed of SafetyOps cycles. One of these activities is "Hazard Analysis & Risk Assessment" (HARA), which is an essential step to start the safety requirements specification. As a potential approach to increase the speed of this step in SafetyOps, we have delved into the capabilities of Large Language Models (LLMs). Our objective is to systematically assess their potential for application in the field of safety engineering. To that end, we propose a framework to support a higher degree of automation of HARA with LLMs. Despite our endeavors to automate as much of the process as possible, expert review remains crucial to ensure the validity and correctness of the analysis results, with necessary modifications made accordingly.

CCS Concepts:
* Software and its engineering → Software verification and validation
* General and reference → Verification
* Computing methodologies → Natural language processing
* Computer systems organization → Dependable and fault-tolerant systems and networks
DOI: 10.1145/3644815.3644953
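
The summary describes a human-in-the-loop framework for LLM-assisted HARA, but the record itself contains no implementation details. The Python sketch below is purely illustrative and is not the authors' framework: the prompt wording, the HaraDraft record, the call_llm hook, and the ISO 26262-style S/E/C fields are assumptions made for this example. It shows only the general idea the abstract states, namely that the LLM drafts the analysis automatically and every draft is held for mandatory expert review.

```python
"""Illustrative sketch only: a minimal human-in-the-loop pipeline for
LLM-assisted HARA drafting. All names and the prompt wording are
assumptions for this example, not the framework proposed in the paper."""

from dataclasses import dataclass
from typing import Callable


@dataclass
class HaraDraft:
    function: str                 # vehicle function under analysis
    operational_situation: str    # driving scenario considered
    llm_output: str               # raw hazard/risk text returned by the LLM
    status: str = "pending expert review"  # experts must approve or edit


# Hypothetical prompt; a real system would follow the paper's methodology.
PROMPT_TEMPLATE = (
    "You are assisting with an ISO 26262-style HARA.\n"
    "Function: {function}\n"
    "Operational situation: {situation}\n"
    "List plausible hazardous events and, for each, a preliminary "
    "severity (S), exposure (E), and controllability (C) rating with a "
    "one-sentence rationale."
)


def draft_hara(function: str, situation: str,
               call_llm: Callable[[str], str]) -> HaraDraft:
    """Ask the LLM for a draft analysis; never auto-approve the result."""
    prompt = PROMPT_TEMPLATE.format(function=function, situation=situation)
    return HaraDraft(function, situation, call_llm(prompt))


if __name__ == "__main__":
    # Stand-in for a real LLM client (e.g. a chat-completion API call).
    fake_llm = lambda prompt: "H1: unintended braking on highway; S2, E4, C2 ..."
    draft = draft_hara("Automatic Emergency Braking",
                       "highway driving, dry road, 120 km/h", fake_llm)
    print(draft.status)       # -> pending expert review
    print(draft.llm_output)   # expert reviews, corrects, then approves
```

The design choice mirrors the abstract's caveat: the status field defaults to "pending expert review" so that no LLM-generated analysis enters the safety requirements specification without a human sign-off step.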