Trustworthiness Assurance Assessment for High-Risk AI-Based Systems

Bibliographic Details
Published in: IEEE Access, Vol. 12, pp. 22718-22745
Main Authors: Stettinger, Georg; Weissensteiner, Patrick; Khastgir, Siddartha
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024

Summary: This work proposes methodologies for ensuring the trustworthiness of high-risk artificial intelligence (AI) systems (AIS) to achieve compliance with the European Union's (EU) AI Act. AIS classified as high-risk must fulfill seven requirements to be considered trustworthy and human-centric, and subsequently to be considered for deployment. These requirements are equally important, mutually supportive, and should be implemented and evaluated throughout the AI lifecycle. The assurance of trustworthiness is influenced by ethical considerations, amongst others. Hence, the operational design domain (ODD) and behavior competency (BC) concepts from the automated driving domain are utilized in risk assessment strategies to quantify different types of residual risk. The presented methodology is guided by the consistent application of the ODD and its related BC concept throughout the entire AI lifecycle, focusing on the trustworthiness assurance framework and its associated process as the main pillars for AIS certification. The overall objective of trustworthy and human-centric AIS is divided into seven interconnected sub-goals: the formulation of use restrictions, the trustworthiness assurance/argument itself, the identification of dysfunctional cases, the utilization of scenario databases and datasets, the application of metrics for evaluation, the implementation of the proposed concept across the AI lifecycle, and sufficient consideration of human factors. The role of standards in the assurance process is discussed, considering existing gaps and areas for improvement. The work concludes with a summary of the developed approach, highlighting key takeaways and action points. Finally, a roadmap to ensure trustworthy and human-centric behavior of future AIS is outlined.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3364387