A More Interpretable Classifier For Multiple Sclerosis
Published in: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), pp. 1062–1066
Main Authors:
Format: Conference Proceeding
Language: English
Published: IEEE, 13.04.2021
Summary: Over the past years, deep learning has proven its effectiveness in medical imaging for diagnosis and segmentation. Nevertheless, to be fully integrated into clinical practice, these methods must both reach good performance and convince practitioners of their interpretability. An interpretable model should therefore base its decisions on clinically relevant information, as a domain expert would. With this purpose, we propose a more interpretable classifier for the most widespread autoimmune neuroinflammatory disease: multiple sclerosis. This disease is characterized by brain lesions visible on MRI (magnetic resonance images), on which diagnosis is based. Using Integrated Gradients attributions, we show that feeding brain tissue probability maps to the deep network instead of raw MR images yields a more accurate and interpretable classifier whose decisions are strongly based on lesions.
ISSN: 1945-8452
DOI: 10.1109/ISBI48211.2021.9434074
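The summary describes computing Integrated Gradients attributions for a deep network whose input is a brain tissue probability map. As a rough illustration of that idea only (not the paper's actual implementation), the sketch below uses the Captum library to attribute a toy 3D CNN's prediction back to the voxels of a tissue probability map; the network architecture, input shape, channel layout, and target class index are all illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch and Captum are installed.
# Everything below (architecture, shapes, class indices) is hypothetical.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients


class TinyClassifier(nn.Module):
    """Hypothetical 3D CNN taking a tissue probability map (e.g. 3 channels:
    grey matter, white matter, CSF) and predicting patient vs. control."""

    def __init__(self, in_channels=3, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


model = TinyClassifier().eval()

# Dummy tissue probability map: batch of 1, 3 channels, 32^3 voxels.
tissue_maps = torch.rand(1, 3, 32, 32, 32)

# Integrated Gradients interpolates from a baseline (here an all-zero
# volume) to the input and integrates the gradients along that path.
ig = IntegratedGradients(model)
attributions = ig.attribute(
    tissue_maps,
    baselines=torch.zeros_like(tissue_maps),
    target=1,      # attribute the (assumed) "patient" class
    n_steps=50,
)

# Voxel-wise attribution map with the same shape as the input; high values
# mark voxels (e.g. lesions) that drive the classifier's decision.
print(attributions.shape)
```

In this kind of analysis, the attribution map is typically overlaid on the input volume to check whether high-attribution voxels coincide with lesions, which is the interpretability criterion the summary refers to.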