How to Induce Trust in Medical AI Systems
Published in | *Advances in Conceptual Modeling*, pp. 5–14
---|---
Main Authors | , ,
Format | Book Chapter
Language | English
Published | Cham: Springer International Publishing
Series | Lecture Notes in Computer Science
Summary: | Trust is an important prerequisite for the acceptance of an Artificial Intelligence (AI) system, in particular in the medical domain. Explainability is currently discussed as the key approach to induce trust. Since a medical AI system is considered a medical device, it also has to be formally certified by an officially recognised agency. The paper argues that neither explainability nor certification suffice to tackle the trust problem. Instead, we propose an alternative approach aimed at showing the physician how well a patient is represented in the original training data set. We operationalize this approach by developing formal indicators and illustrate their usefulness with a real-world medical data set.
---|---
ISBN: | 9783030658465; 3030658465
ISSN: | 0302-9743; 1611-3349
DOI: | 10.1007/978-3-030-65847-2_1
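The summary describes formal indicators of how well an individual patient is represented in the original training data. The chapter's actual indicators are not reproduced in this record, so the following is only an illustrative sketch under assumed conventions: a toy indicator that compares the patient's mean distance to its k nearest training points against the same quantity computed for every training point. The function name `representation_score` and all parameters are hypothetical, not taken from the paper.

```python
import numpy as np

def representation_score(patient, X_train, k=10):
    """Toy representativeness indicator (hypothetical; NOT the paper's
    actual indicator). Compares the patient's mean k-nearest-neighbour
    distance against the leave-one-out distribution of the same quantity
    over the training set. Returns the fraction of training points that
    are represented *worse* than the patient: values near 1.0 suggest
    the patient lies well inside the training distribution."""
    def knn_dist(x, X, k):
        # mean Euclidean distance from x to its k closest rows of X
        d = np.linalg.norm(X - x, axis=1)
        return np.sort(d)[:k].mean()

    patient_d = knn_dist(patient, X_train, k)
    # leave-one-out: score each training point against the remaining ones
    train_d = np.array([
        knn_dist(X_train[i], np.delete(X_train, i, axis=0), k)
        for i in range(len(X_train))
    ])
    return float(np.mean(train_d >= patient_d))

# Synthetic illustration: a patient near the bulk of the data should
# score higher than a patient far outside it.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
inlier = np.zeros(5)       # near the centre of the synthetic cohort
outlier = np.full(5, 6.0)  # far outside the synthetic cohort
print(representation_score(inlier, X) > representation_score(outlier, X))
```

A physician-facing indicator along these lines yields a single, easily communicated number per patient; the paper's own operationalization should be consulted for the indicators it actually proposes.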