A scoping review of transfer learning research on medical image analysis using ImageNet

Bibliographic Details
Published in: Computers in Biology and Medicine, Vol. 128, p. 104115
Main Authors: Morid, Mohammad Amin; Borjali, Alireza; Del Fiol, Guilherme
Format: Journal Article
Language: English
Published: United States: Elsevier Ltd, 01.01.2021

Summary: Employing transfer learning (TL) with convolutional neural networks (CNNs) pre-trained on the non-medical ImageNet dataset has shown promising results for medical image analysis in recent years. We aimed to conduct a scoping review to identify these studies and summarize their characteristics in terms of the problem description, input, methodology, and outcome. To identify relevant studies, MEDLINE, IEEE, and the ACM digital library were searched for studies published between June 1st, 2012 and January 2nd, 2020. Two investigators independently reviewed articles to determine eligibility and to extract data according to a study protocol defined a priori. After screening of 8421 articles, 102 met the inclusion criteria. Of 22 anatomical areas, eye (18%), breast (14%), and brain (12%) were the most commonly studied. Data augmentation was performed in 72% of fine-tuning TL studies versus 15% of feature-extracting TL studies. Inception models were the most commonly used in breast-related studies (50%), while VGGNet was the most common in eye (44%), skin (50%), and tooth (57%) studies. AlexNet for brain studies (42%) and DenseNet for lung studies (38%) were the most frequently used models. Inception models were the most frequently used for studies that analyzed ultrasound (55%), endoscopy (57%), and skeletal system X-rays (57%). VGGNet was the most common for fundus (42%) and optical coherence tomography images (50%). AlexNet was the most frequent model for brain MRIs (36%) and breast X-rays (50%). 35% of the studies compared their model with other well-trained CNN models, and 33% of them provided visualization for interpretation. This study identified the most prevalent tracks of implementation in the literature for data preparation, methodology selection, and output evaluation across various medical image analysis tasks. We also identified several critical research gaps in TL studies on medical image analysis.
The findings of this scoping review can be used in future TL studies to guide the selection of appropriate research approaches, as well as to identify research gaps and opportunities for innovation.

Highlights:
• We reviewed 102 studies from MEDLINE, IEEE, and the ACM digital library.
• Most prevalent models:
  • wide CNN models using Inception modules for ultrasound, endoscopy, and skeletal system X-rays
  • shallow CNN models with large kernel size (AlexNet) for brain MRIs and breast X-rays
  • deep CNN models (DenseNet) for lung X-rays
  • shallow CNN models with small kernel size (VGGNet) for eye images, skin, and dental X-rays
• Identified research gaps:
  • A stronger focus on systematic benchmarking is critical to understand the optimal model for each medical imaging task.
  • Deep networks should be further investigated across a variety of image modalities and anatomical sites.
  • Further research is required to identify whether larger data size or a better choice of CNN model is the more important factor for optimizing accuracy, time, and memory.
  • More research is needed to provide insight into the decision-making process of CNN models through visualization.
  • Data augmentation methods other than image modification warrant investigation.
ISSN: 0010-4825; 1879-0534
DOI: 10.1016/j.compbiomed.2020.104115