Interpretable Deep Learning for Brain Tumor Diagnosis: Occlusion Sensitivity-Driven Explainability in MRI Classification
Published in | VFAST Transactions on Software Engineering, Vol. 13, No. 2, pp. 135-146 |
---|---|
Main Authors | , , , |
Format | Journal Article |
Language | English |
Published | 30.05.2025 |
ISSN | 2411-6246; 2309-3978 |
DOI | 10.21015/vtse.v13i2.2082 |
Summary: Magnetic resonance imaging (MRI) serves as a crucial diagnostic tool, particularly for brain tumors, where early detection significantly improves patient prognosis. The growing use of deep learning in medical imaging has led to substantial progress, yet the opaque nature of these models creates barriers to clinical acceptance, especially for critical applications such as tumor diagnosis. Our research applies explainable AI (XAI) techniques to improve the transparency of CNN-based brain tumor detection using MRI data. Working with a dataset containing 7,022 images spanning four tumor categories, our model attains 80% accuracy while employing occlusion sensitivity analysis to produce visual interpretations. These heatmaps identify the regions most influential to each prediction, giving clinicians insight into the model's decision process. This XAI integration enhances both understanding and accountability in healthcare AI systems, facilitating more reliable diagnostic tools.

Precise early identification of brain tumors through MRI dramatically affects survival outcomes, yet human interpretation remains time-consuming and variable. While CNNs show impressive classification results, their unclear reasoning limits clinical implementation. Our study introduces an XAI approach that pairs an accurate CNN classifier (80% on 7,024 multi-class scans) with occlusion analysis to create intuitive visual explanations. By methodically occluding image regions and measuring the resulting changes in prediction, we generate heatmaps that accurately pinpoint tumor-distinguishing features, matching radiological assessment. Comparative results demonstrate that occlusion analysis offers better spatial precision than gradient-based methods such as Grad-CAM for tumor classification (meningioma, glioma, pituitary). This research advances clinically useful AI by connecting model effectiveness with interpretability in brain tumor imaging.
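The summary describes occlusion sensitivity as systematically occluding image regions and measuring how the model's prediction changes. The paper itself provides no code, so the following is a minimal sketch of that idea, assuming a PyTorch CNN classifier that outputs class logits for a single (1, C, H, W) MRI slice; the function name, patch size, stride, and zero-valued occlusion baseline are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def occlusion_sensitivity(model, image, target_class, patch_size=16, stride=8, baseline=0.0):
    """Slide an occluding patch over the image and record how much the
    target-class probability drops at each position; a larger drop marks
    a more influential region."""
    model.eval()
    _, _, h, w = image.shape  # expects a (1, C, H, W) tensor

    with torch.no_grad():
        # Probability of the target class for the unmodified image.
        base_prob = F.softmax(model(image), dim=1)[0, target_class].item()

        rows = (h - patch_size) // stride + 1
        cols = (w - patch_size) // stride + 1
        heatmap = torch.zeros(rows, cols)

        for i, top in enumerate(range(0, h - patch_size + 1, stride)):
            for j, left in enumerate(range(0, w - patch_size + 1, stride)):
                # Replace one patch with the baseline value and re-classify.
                occluded = image.clone()
                occluded[:, :, top:top + patch_size, left:left + patch_size] = baseline
                prob = F.softmax(model(occluded), dim=1)[0, target_class].item()
                heatmap[i, j] = base_prob - prob

    return heatmap
```

In practice, the returned heatmap would be upsampled to the input resolution and overlaid on the MRI slice to produce visual explanations like those the paper compares against Grad-CAM.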