Black Box Nature of Deep Learning for Digital Pathology: Beyond Quantitative to Qualitative Algorithmic Performances

Bibliographic Details
Published in: Artificial Intelligence and Machine Learning for Digital Pathology, Vol. 12090, pp. 95-101
Main Author: Hayashi, Yoichi
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing, 2020
Series: Lecture Notes in Computer Science

Summary: Artificial intelligence (AI), particularly deep learning (DL), which involves automated feature extraction using deep neural networks, is expected to be used increasingly often by clinicians in the near future. AI can analyze medical images and patient data at a level not possible for a single physician; however, the resulting parameters are difficult to interpret. This so-called "black box" problem makes DL opaque. The aim of the present study is to help bring transparency to black box machine learning for digital pathology (DP). To achieve this aim, we review the "black box" problem and the limitations of DL for DP, and attempt to reveal a paradigm shift in DP in which the focus moves beyond diagnostic accuracy toward explainability. DL in medical fields such as DP still has considerable limitations. Interpreting and applying DL effectively in DP requires substantial expertise in computer science. Moreover, although rules can be extracted using the Re-RX family, their classification accuracy is slightly lower than that of a convolutional neural network trained on whole images; thus, to establish accountability, one of the most important issues in DP is to explain classification results clearly. Although more interpretable algorithms seem likely to be more readily accepted by medical professionals, it remains to be determined whether this leads to increased clinical effectiveness. For AI to be accepted by pathologists and physicians in DP, not only quantitative but also qualitative algorithmic performance, such as rule extraction, should be improved.
ISBN: 3030504018; 9783030504014
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-50402-1_6