Collaborative Learning for Annotation‐Efficient Volumetric MR Image Segmentation
Published in: Journal of Magnetic Resonance Imaging, Vol. 60, No. 4, pp. 1604-1614
Main Authors:
Format: Journal Article
Language: English
Published: Hoboken, USA: John Wiley & Sons, Inc (Wiley Subscription Services, Inc), 01.10.2024
Summary:

Background
Deep learning has shown great potential for accurate MR image segmentation when sufficient labeled data are provided for network optimization. However, manually annotating three‐dimensional (3D) MR images is tedious and time‐consuming, requiring experts with rich domain knowledge and experience.
Purpose
To develop a deep learning method that exploits sparse annotations, namely a single two‐dimensional (2D) slice label for each 3D training MR image.
Study Type
Retrospective.
Population
Three‐dimensional MR images of 150 subjects from two publicly available datasets were included. Of these, 50 (1377 image slices) were used for prostate segmentation and the other 100 (8800 image slices) for left atrium segmentation. Five‐fold cross‐validation experiments were carried out on the first dataset. For the second dataset, 80 subjects were used for training and 20 for testing.
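A minimal sketch of how these two subject-level splits could be reproduced is given below; the subject identifiers, variable names, and random seed are illustrative assumptions, not details from the article.

```python
# Sketch of the two data splits described above. Subject IDs, names, and the
# random seed are illustrative assumptions, not taken from the article.
from sklearn.model_selection import KFold

prostate_ids = [f"prostate_{i:03d}" for i in range(50)]   # 50 subjects
atrium_ids = [f"atrium_{i:03d}" for i in range(100)]      # 100 subjects

# Five-fold cross-validation over the prostate dataset
# (40 training / 10 testing subjects per fold).
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(prostate_ids)):
    train_subjects = [prostate_ids[i] for i in train_idx]
    test_subjects = [prostate_ids[i] for i in test_idx]

# Fixed 80/20 split for the left atrium dataset.
atrium_train, atrium_test = atrium_ids[:80], atrium_ids[80:]
```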
Field Strength/Sequence
1.5 T and 3.0 T; axial T2‐weighted and late gadolinium‐enhanced, 3D respiratory navigated, inversion recovery prepared gradient echo pulse sequence.
Assessment
A collaborative learning method integrating the strengths of semi‐supervised and self‐supervised learning schemes was developed. The method was trained using labeled central slices and unlabeled noncentral slices. Segmentation performance on the testing set was reported quantitatively and qualitatively.
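The abstract does not specify the exact loss functions, so the following is only a generic mean-teacher-style sketch of the training signal it describes: a supervised loss on the single labeled central slice combined with a consistency loss on the unlabeled noncentral slices. All function and variable names, the noise perturbation, and the EMA rate are assumptions.

```python
# Hedged sketch of one training step: supervised loss on the labeled central
# slice plus a consistency loss on unlabeled slices. This illustrates the kind
# of semi-supervised signal the abstract describes; it is NOT the article's
# exact method, and every name and hyperparameter here is an assumption.
import torch
import torch.nn.functional as F

def train_step(student, teacher, volume, central_label, central_idx,
               optimizer, lam=0.1):
    # volume: (S, 1, H, W) stack of 2D slices from one 3D MR image;
    # central_label: (H, W) long tensor for the single annotated slice.
    logits = student(volume)                                    # (S, C, H, W)

    # Supervised loss: only the central slice has a ground-truth label.
    sup_loss = F.cross_entropy(logits[central_idx:central_idx + 1],
                               central_label.unsqueeze(0))

    # Consistency loss on the unlabeled noncentral slices, against an
    # EMA teacher fed a noise-perturbed copy of the input.
    with torch.no_grad():
        teacher_prob = torch.softmax(
            teacher(volume + 0.05 * torch.randn_like(volume)), dim=1)
    mask = torch.ones(volume.shape[0], dtype=torch.bool)
    mask[central_idx] = False
    cons_loss = F.mse_loss(torch.softmax(logits[mask], dim=1),
                           teacher_prob[mask])

    loss = sup_loss + lam * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Mean-teacher convention: teacher weights track an EMA of the student.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(0.99).add_(s_p, alpha=0.01)
    return loss.item()
```

The self-supervised component the abstract mentions, and its interaction with the semi-supervised branch, is not described at this level of detail and is therefore omitted from the sketch.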
Statistical Tests
Quantitative evaluation metrics including boundary intersection‐over‐union (B‐IoU), Dice similarity coefficient, average symmetric surface distance, and relative absolute volume difference were calculated. Paired t test was performed, and P < 0.05 was considered statistically significant.
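A sketch of how the first two metrics and the paired t test could be computed with NumPy/SciPy follows. The one-voxel erosion band used to approximate the boundary for B-IoU is an assumption; the boundary width is not stated in the abstract.

```python
# Hedged sketch of Dice, boundary IoU, and the paired t test. The one-voxel
# boundary band is an assumption; pred and gt are boolean arrays.
import numpy as np
from scipy import ndimage, stats

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def boundary_iou(pred, gt, width=1):
    # Boundary band = mask minus its erosion, a thin shell along the surface.
    struct = ndimage.generate_binary_structure(pred.ndim, 1)
    pb = pred & ~ndimage.binary_erosion(pred, struct, iterations=width)
    gb = gt & ~ndimage.binary_erosion(gt, struct, iterations=width)
    return np.logical_and(pb, gb).sum() / np.logical_or(pb, gb).sum()

# Paired t test across per-subject scores of two methods (P < 0.05 significant).
scores_a = np.random.rand(20)  # placeholder per-subject scores, method A
scores_b = np.random.rand(20)  # placeholder per-subject scores, method B
t_stat, p_value = stats.ttest_rel(scores_a, scores_b)
```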
Results
Compared to fully supervised training with only the labeled central slice, mean teacher, uncertainty‐aware mean teacher, deep co‐training, interpolation consistency training (ICT), and ambiguity‐consensus mean teacher, the proposed method achieved a substantial improvement in segmentation accuracy, increasing the mean B‐IoU significantly by more than 10.0% for prostate segmentation (proposed method B‐IoU: 70.3% ± 7.6% vs. ICT B‐IoU: 60.3% ± 11.2%) and by more than 6.0% for left atrium segmentation (proposed method B‐IoU: 66.1% ± 6.8% vs. ICT B‐IoU: 60.1% ± 7.1%).
Data Conclusions
A collaborative learning method trained using sparse annotations can segment prostate and left atrium with high accuracy.
Level of Evidence
0
Technical Efficacy
Stage 1
Bibliography: The first two authors contributed equally to this work.
ISSN: 1053-1807, 1522-2586
DOI: 10.1002/jmri.29194