Should AI models be explainable to clinicians?

Bibliographic Details
Published in Critical care (London, England) Vol. 28; no. 1; p. 301
Main Authors Abgrall, Gwénolé, Holder, Andre L, Chelly Dagdia, Zaineb, Zeitouni, Karine, Monnet, Xavier
Format Journal Article
Language English
Published London: BioMed Central Ltd, 12.09.2024
Summary: In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, comprehending the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. “Explainable AI” (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet defining explainability and standardising its assessment remain ongoing challenges, and even as XAI grows as a field, a trade-off between performance and explainability may be required.
ISSN: 1364-8535
1466-609X
DOI:10.1186/s13054-024-05005-y