Low-Resolution Chest X-Ray Classification Via Knowledge Distillation and Multi-Task Learning

Bibliographic Details
Published in: 2024 IEEE International Symposium on Biomedical Imaging (ISBI), pp. 1 - 5
Main Authors: Akhter, Yasmeena; Ranjan, Rishabh; Singh, Richa; Vatsa, Mayank
Format: Conference Proceeding
Language: English
Published: IEEE, 27.05.2024

Summary: This research addresses the challenges of diagnosing chest X-rays (CXRs) at low resolutions, a common limitation in resource-constrained healthcare settings. High-resolution CXR imaging is crucial for identifying small but critical anomalies, such as nodules or opacities. However, when images are downsized for processing in Computer-Aided Diagnosis (CAD) systems, vital spatial details and receptive fields are lost, hampering diagnostic accuracy. To address this, the paper presents the Multilevel Collaborative Attention Knowledge (MLCAK) method. This approach leverages the self-attention mechanism of Vision Transformers (ViT) to transfer critical diagnostic knowledge from high-resolution images to enhance the diagnostic efficacy of low-resolution CXRs. MLCAK incorporates local pathological findings to boost model explainability, enabling more accurate global predictions in a multi-task framework tailored for low-resolution CXR analysis. The research, using the VinDr-CXR dataset, shows a considerable improvement in the ability to diagnose diseases from low-resolution images (e.g., 28 × 28), suggesting a shift away from the traditional reliance on high-resolution imaging (e.g., 224 × 224).
ISSN: 1945-8452
DOI: 10.1109/ISBI56570.2024.10635737
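
The summary above describes distilling knowledge from a high-resolution ViT into a low-resolution model trained with both global (image-level) and local (finding-level) objectives. The PyTorch snippet below is a minimal, hypothetical sketch of that general idea only; the toy encoder, module names, label shapes, and loss weighting are illustrative assumptions and do not reproduce the authors' MLCAK implementation.

```python
# Minimal sketch (assumption): attention-map distillation from a high-resolution
# "teacher" ViT to a low-resolution "student" ViT, combined with multi-task heads
# for global disease labels and local (per-patch) findings. Not the MLCAK code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyViT(nn.Module):
    """Toy ViT encoder: patch embedding + a single self-attention block."""

    def __init__(self, img_size, patch_size, dim=64, heads=4):
        super().__init__()
        n_patches = (img_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch_size, stride=patch_size)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        # (B, 1, H, W) -> (B, N, D) patch tokens with positional embeddings
        t = self.patch_embed(x).flatten(2).transpose(1, 2) + self.pos
        t = self.norm(t)
        out, attn_weights = self.attn(t, t, t, need_weights=True,
                                      average_attn_weights=True)
        return out, attn_weights  # attn_weights: (B, N, N)


class MultiTaskStudent(nn.Module):
    """Low-resolution student with a global head and a per-patch local head (assumed design)."""

    def __init__(self, n_global=6, n_local=6, dim=64):
        super().__init__()
        self.encoder = TinyViT(img_size=28, patch_size=4, dim=dim)
        self.global_head = nn.Linear(dim, n_global)   # image-level disease logits
        self.local_head = nn.Linear(dim, n_local)     # per-patch finding logits

    def forward(self, x):
        feats, attn = self.encoder(x)
        return self.global_head(feats.mean(dim=1)), self.local_head(feats), attn


def training_step(student, teacher, x_lr, x_hr, y_global, y_local, alpha=0.5):
    """One illustrative step: multi-task supervision plus attention distillation."""
    with torch.no_grad():
        _, attn_teacher = teacher(x_hr)               # frozen high-res teacher attention

    g_logits, l_logits, attn_student = student(x_lr)
    loss_global = F.binary_cross_entropy_with_logits(g_logits, y_global)
    loss_local = F.binary_cross_entropy_with_logits(l_logits, y_local)
    # Teacher and student are built with matching patch counts, so their
    # attention maps can be compared directly.
    loss_distill = F.mse_loss(attn_student, attn_teacher)
    return loss_global + loss_local + alpha * loss_distill


# Example usage with dummy data (hypothetical shapes):
# teacher = TinyViT(img_size=224, patch_size=32)   # 7x7 = 49 tokens
# student = MultiTaskStudent()                     # 28/4 = 7 -> 49 tokens
# x_hr, x_lr = torch.randn(2, 1, 224, 224), torch.randn(2, 1, 28, 28)
# y_g, y_l = torch.rand(2, 6), torch.rand(2, 49, 6)
# loss = training_step(student, teacher, x_lr, x_hr, y_g, y_l)
```

In this sketch the teacher (224 × 224 input, patch size 32) and student (28 × 28 input, patch size 4) both produce 7 × 7 token grids so their attention maps align directly; the paper's multilevel collaborative transfer and explainability components are more involved than this single-layer illustration.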