Enhancing diagnostic deep learning via self-supervised pretraining on large-scale, unlabeled non-medical images


Bibliographic Details
Published in: European Radiology Experimental, Vol. 8, No. 1, p. 10
Main Authors: Tayebi Arasteh, Soroosh; Misera, Leo; Kather, Jakob Nikolas; Truhn, Daniel; Nebelung, Sven
Format: Journal Article
Language: English
Published: Vienna: Springer Vienna (Springer Nature B.V. / SpringerOpen), 08.02.2024
Summary:
Background: Pretraining on labeled datasets, such as ImageNet, has become a technical standard in advanced medical image analysis. However, the emergence of self-supervised learning (SSL), which leverages unlabeled data to learn robust features, presents an opportunity to bypass the intensive labeling process. In this study, we explored whether SSL pretraining on non-medical images can be applied to chest radiographs and how it compares to supervised pretraining on non-medical images and on medical images.
Methods: We utilized a vision transformer and initialized its weights based on the following: (i) SSL pretraining on non-medical images (DINOv2), (ii) supervised learning (SL) pretraining on non-medical images (ImageNet dataset), and (iii) SL pretraining on chest radiographs from the MIMIC-CXR database, the largest labeled public dataset of chest radiographs to date. We tested our approach on over 800,000 chest radiographs from six large global datasets, diagnosing more than 20 different imaging findings. Performance was quantified using the area under the receiver operating characteristic curve and evaluated for statistical significance using bootstrapping.
Results: SSL pretraining on non-medical images not only outperformed ImageNet-based pretraining (p < 0.001 for all datasets) but, in certain cases, also exceeded SL on the MIMIC-CXR dataset. Our findings suggest that selecting the right pretraining strategy, especially with SSL, can be pivotal for improving the diagnostic accuracy of artificial intelligence in medical imaging.
Conclusions: By demonstrating the promise of SSL in chest radiograph analysis, we underline a transformative shift towards more efficient and accurate AI models in medical imaging.
Relevance statement: Self-supervised learning highlights a paradigm shift towards the enhancement of AI-driven accuracy and efficiency in medical imaging. Given its promise, the broader application of self-supervised learning in medical imaging calls for deeper exploration, particularly in contexts where comprehensive annotated datasets are limited.
Key points:
• Validated on over 800,000 chest radiographs from six datasets and more than 20 imaging findings, self-supervised pretraining on non-medical images outperformed ImageNet-based supervised pretraining.
• In certain cases, non-medical self-supervised learning even outperformed task-specific supervised learning on large-scale chest radiographs.
• Self-supervised learning signifies AI's transformative potential in medical imaging, especially chest radiography.
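
To make the three initialization strategies concrete, the following is a minimal Python sketch using PyTorch, torch.hub, and the timm library. The checkpoint identifiers ("dinov2_vitb14", "vit_base_patch16_224") and the MIMIC-CXR checkpoint file name are illustrative assumptions; the abstract specifies only the pretraining sources (DINOv2, ImageNet, MIMIC-CXR), not these exact calls or model sizes.

    import torch
    import timm

    num_findings = 20  # multi-label output: one logit per imaging finding

    # (i) SSL pretraining on non-medical images (DINOv2).
    # Official DINOv2 backbones are distributed via torch.hub; they ship
    # without a classifier head, so a linear head is attached for fine-tuning.
    ssl_backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
    ssl_head = torch.nn.Linear(ssl_backbone.embed_dim, num_findings)

    # (ii) SL pretraining on non-medical images (ImageNet).
    sl_imagenet = timm.create_model(
        "vit_base_patch16_224", pretrained=True, num_classes=num_findings
    )

    # (iii) SL pretraining on chest radiographs (MIMIC-CXR).
    # Assumes a locally trained checkpoint; "mimic_cxr_pretrained.pt" is a
    # hypothetical file name, and strict=False tolerates a mismatched head.
    sl_mimic = timm.create_model(
        "vit_base_patch16_224", pretrained=False, num_classes=num_findings
    )
    sl_mimic.load_state_dict(
        torch.load("mimic_cxr_pretrained.pt"), strict=False
    )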
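
The evaluation protocol (AUROC with bootstrapping) can likewise be sketched in a few lines. This is an assumption-laden illustration of case-level bootstrapping for confidence intervals, not the authors' exact statistical procedure; pairwise model comparisons would additionally bootstrap the AUROC difference between two models on the same resampled cases.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def bootstrap_auroc(y_true, y_score, n_boot=1000, seed=0):
        """Point-estimate AUROC and a 95% bootstrap confidence interval."""
        rng = np.random.default_rng(seed)
        n = len(y_true)
        aucs = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)  # resample cases with replacement
            if len(np.unique(y_true[idx])) < 2:
                continue  # skip degenerate resamples containing one class only
            aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
        lo, hi = np.percentile(aucs, [2.5, 97.5])
        return roc_auc_score(y_true, y_score), (lo, hi)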
ISSN: 2509-9280
DOI: 10.1186/s41747-023-00411-3