A Radiology-focused Review of Predictive Uncertainty for AI Interpretability in Computer-assisted Segmentation

Bibliographic Details
Published in: Radiology: Artificial Intelligence, Vol. 3, No. 6, p. e210031
Main Authors: McCrindle, Brian; Zukotynski, Katherine; Doyle, Thomas E; Noseworthy, Michael D
Format: Journal Article
Language: English
Published: United States: Radiological Society of North America, 01.11.2021
Summary: Recent advances in and the availability of computer hardware, software tools, and massive digital data archives have enabled the rapid development of artificial intelligence (AI) applications. Concerns over whether AI tools can "communicate" decisions to radiologists and primary care physicians are of particular importance because automated clinical decisions can substantially impact patient outcomes. A challenge facing the clinical implementation of AI stems from the potential lack of trust clinicians have in these predictive models. This review expands on the existing literature on interpretability methods for deep learning and surveys the state-of-the-art methods for predictive uncertainty estimation in computer-assisted segmentation tasks. Last, we discuss how uncertainty can improve predictive performance and model interpretability and can act as a tool to help foster trust. Keywords: Segmentation, Quantification, Ethics, Bayesian Network (BN). © RSNA, 2021.
ISSN: 2638-6100
DOI: 10.1148/ryai.2021210031