How to Induce Trust in Medical AI Systems

Bibliographic Details
Published in: Advances in Conceptual Modeling, pp. 5-14
Main Authors: Reimer, Ulrich; Tödtli, Beat; Maier, Edith
Format: Book Chapter
Language: English
Published: Cham: Springer International Publishing
Series: Lecture Notes in Computer Science

More Information
Summary: Trust is an important prerequisite for the acceptance of an Artificial Intelligence (AI) system, particularly in the medical domain. Explainability is currently discussed as the key approach to inducing trust. Since a medical AI system is considered a medical device, it also has to be formally certified by an officially recognised agency. The paper argues that neither explainability nor certification suffices to tackle the trust problem. Instead, we propose an alternative approach aimed at showing the physician how well a patient is represented in the original training data set. We operationalize this approach by developing formal indicators and illustrate their usefulness with a real-world medical data set.
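The paper's formal indicators are not reproduced in this record. As a rough illustration of the underlying idea, the following Python sketch scores how well a new patient is covered by a training data set, using the mean distance to the k nearest training samples normalized by the training set's own nearest-neighbour distances. This concrete indicator is a hypothetical stand-in, not the authors' actual formulation.

```python
# A minimal sketch of one plausible "representativeness" indicator: how well
# is a new patient covered by the training data? The concrete measure below
# (mean distance to the k nearest training samples, normalized by the typical
# k-NN distance within the training set) is a hypothetical stand-in for the
# paper's formal indicators.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def representativeness(train_X: np.ndarray, patient: np.ndarray, k: int = 5) -> float:
    """Return a positive score: values near (or below) 1 suggest the patient
    lies in a well-populated region of the training data; large values
    suggest the patient is poorly represented."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(train_X)
    # Baseline: typical k-NN distance among the training points themselves
    # (skip column 0, each point's zero distance to itself).
    train_dists, _ = nn.kneighbors(train_X)
    baseline = train_dists[:, 1:].mean()
    # Distance of the new patient to its k nearest training samples.
    patient_dists, _ = nn.kneighbors(patient.reshape(1, -1), n_neighbors=k)
    return float(patient_dists.mean() / baseline)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_X = rng.normal(size=(500, 8))     # synthetic patient features
    typical = rng.normal(size=8)            # patient resembling the training data
    outlier = rng.normal(loc=6.0, size=8)   # patient far outside the training data
    print(representativeness(train_X, typical))   # close to 1: well represented
    print(representativeness(train_X, outlier))   # much greater than 1: poorly represented
```

Any indicator of this kind could be shown to the physician alongside the model's prediction, flagging cases where the system extrapolates beyond its training data.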
ISBN: 9783030658465; 3030658465
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-65847-2_1