New unified insights on deep learning in radiological and pathological images: Beyond quantitative performances to qualitative interpretation
| Published in | Informatics in Medicine Unlocked, Vol. 19, p. 100329 |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | Elsevier Ltd, 2020 |
Summary: Deep learning (DL) has become the main focus of research in the field of artificial intelligence, despite its lack of explainability and interpretability. DL mainly involves automated feature extraction using deep neural networks (DNNs) that can classify radiological and pathological images. Convolutional neural networks (CNNs) can also be applied to pathological image analysis, such as the detection of tumors and the quantification of cellular features. However, to our knowledge, no attempts have been made to identify interpretable signatures from CNN features, and few studies have examined the use of CNNs for cytopathology images. Therefore, the aim of the present paper is to provide new unified insights to aid the development of more interpretable CNN-based methods that classify radiological and pathological images and explain the reason for a classification in the form of if-then rules. We first describe the "black box" problem of shallow NNs, the concept of rule extraction, the renewed attack on the "black box" problem in DNN architectures, and the paradigm shift regarding the transparency of DL achieved through rule extraction. Next, we review the limitations of DL in pathology with regard to histopathology and cytopathology. We then investigate the discrimination of cytological features and their explanations, and review recent techniques for interpretable CNN-based methods in histopathology, as well as current approaches to enhancing the interpretability of CNN-based methods for radiological images. Finally, we provide new unified insights for extracting qualitative interpretable rules for radiological and pathological images.
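The rule extraction the abstract refers to can be illustrated with a minimal surrogate-model sketch. This is a hypothetical toy example, not the authors' method: the `black_box` function stands in for an opaque trained model, and the feature names (`nuclear_area`, `chromatin_density`) and thresholds are invented for illustration. The sketch fits a one-feature decision stump to the black box's own predictions and reads the stump off as a human-readable if-then rule.

```python
def black_box(x):
    # Stand-in for an opaque model (e.g., a trained CNN): here it secretly
    # labels a sample "malignant" whenever feature 0 exceeds 0.6.
    return "malignant" if x[0] > 0.6 else "benign"

def extract_stump_rule(samples, feature_names):
    """Fit a decision stump to the black box's predictions on `samples`
    and return the stump as a human-readable if-then rule."""
    labels = [black_box(x) for x in samples]
    best = None  # (misclassifications, feature index, threshold)
    for f in range(len(feature_names)):
        for x in samples:
            t = x[f]  # candidate threshold taken from the data itself
            pred = ["malignant" if s[f] > t else "benign" for s in samples]
            errors = sum(p != y for p, y in zip(pred, labels))
            if best is None or errors < best[0]:
                best = (errors, f, t)
    _, f, t = best
    return f"IF {feature_names[f]} > {t:.2f} THEN malignant ELSE benign"

# Toy data: (nuclear_area, chromatin_density) pairs, values invented.
samples = [(0.2, 0.9), (0.5, 0.1), (0.6, 0.4), (0.7, 0.3), (0.9, 0.8)]
rule = extract_stump_rule(samples, ["nuclear_area", "chromatin_density"])
print(rule)  # IF nuclear_area > 0.60 THEN malignant ELSE benign
```

In practice, the surrogate would be a richer rule learner or decision tree fitted to many labeled feature vectors, but the principle is the same: the rule approximates the black box's behavior in a form a pathologist or radiologist can inspect.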
Highlights:
• We review the limitations of deep learning in radiology and pathology.
• We review recent ideas inspired by radiological imaging.
• We review approaches to enhance the interpretability of CNN models in pathology.
• We provide new insights for the qualitative interpretation of medical images.
• These approaches follow the same principle and can be unified from the perspective of rule extraction.
ISSN: 2352-9148
DOI: 10.1016/j.imu.2020.100329