Uncertainty-aware multi-view co-training for semi-supervised medical image segmentation and domain adaptation

Bibliographic Details
Published in Medical image analysis Vol. 65; p. 101766
Main Authors Xia, Yingda, Yang, Dong, Yu, Zhiding, Liu, Fengze, Cai, Jinzheng, Yu, Lequan, Zhu, Zhuotun, Xu, Daguang, Yuille, Alan, Roth, Holger
Format Journal Article
Language English
Published Amsterdam: Elsevier B.V., 01.10.2020

Summary:
•A unified framework for semi-supervised medical image segmentation and domain adaptation.
•A co-training-style algorithm that enforces multi-view consistency as additional supervision.
•Uncertainty for each view is estimated and used to weight the generation of reliable pseudo labels.
•Superior semi-supervised segmentation performance on pancreas and multi-organ datasets.
•Improved unsupervised domain adaptation results with and without source-domain data.
Although deep learning-based approaches have achieved great success in medical image segmentation, they usually require large amounts of well-annotated data, which can be extremely expensive to obtain in medical image analysis. Unlabeled data, on the other hand, is much easier to acquire. Semi-supervised learning and unsupervised domain adaptation both take advantage of unlabeled data, and the two tasks are closely related. In this paper, we propose uncertainty-aware multi-view co-training (UMCT), a unified framework that addresses both tasks for volumetric medical image segmentation and efficiently utilizes unlabeled data for better performance. We first rotate and permute the 3D volumes into multiple views and train a 3D deep network on each view. We then apply co-training by enforcing multi-view consistency on unlabeled data, where an uncertainty estimate for each view is used to generate accurate pseudo labels. Experiments on the NIH pancreas segmentation dataset and a multi-organ segmentation dataset show state-of-the-art performance of the proposed framework on semi-supervised medical image segmentation. Under unsupervised domain adaptation settings, we validate the effectiveness of this work by adapting our multi-organ segmentation model to two pathological organs from the Medical Segmentation Decathlon datasets. Additionally, we show that our UMCT-DA model can effectively handle the challenging situation where labeled source data is inaccessible, demonstrating strong potential for real-world applications.
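To make the co-training step more concrete, the following is a minimal PyTorch sketch of the idea described above: a 3D volume is permuted into multiple views, each view's network produces a prediction with an associated uncertainty estimate, and each view is then supervised on unlabeled data by an uncertainty-weighted fusion of the other views' predictions. The specific view permutations, the Monte Carlo dropout variance used as the uncertainty proxy, and the inverse-variance weighting are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch of uncertainty-aware multi-view co-training (UMCT).
# Assumed choices: axis-permutation views, MC-dropout variance as uncertainty,
# inverse-variance weighting of the other views' predictions as pseudo labels.
import torch
import torch.nn.functional as F


def make_views(volume):
    """Generate three views of a 3D volume (B, C, D, H, W) by permuting axes."""
    axial = volume                                          # original orientation
    coronal = volume.permute(0, 1, 3, 2, 4).contiguous()    # swap D and H
    sagittal = volume.permute(0, 1, 4, 3, 2).contiguous()   # swap D and W
    return [axial, coronal, sagittal]


def mc_dropout_predict(model, x, n_samples=4):
    """Mean softmax prediction and per-voxel variance from several stochastic
    forward passes (dropout kept active) as a simple uncertainty proxy."""
    model.train()  # keep dropout layers stochastic
    probs = torch.stack(
        [F.softmax(model(x), dim=1) for _ in range(n_samples)], dim=0
    )
    return probs.mean(dim=0), probs.var(dim=0).mean(dim=1, keepdim=True)


def co_training_loss(models, unlabeled_volume):
    """Each view is supervised by an uncertainty-weighted fusion of the other
    views' predictions, mapped back into a common (axial) frame."""
    views = make_views(unlabeled_volume)
    preds, uncs = [], []
    for model, v in zip(models, views):
        p, u = mc_dropout_predict(model, v)
        preds.append(p)
        uncs.append(u)

    # Undo the axis permutations so all predictions share the axial frame.
    inv = [lambda t: t,
           lambda t: t.permute(0, 1, 3, 2, 4),
           lambda t: t.permute(0, 1, 4, 3, 2)]
    preds = [f(p) for f, p in zip(inv, preds)]
    uncs = [f(u) for f, u in zip(inv, uncs)]

    loss = 0.0
    for i in range(len(models)):
        others = [j for j in range(len(models)) if j != i]
        # Lower variance -> higher confidence -> larger weight.
        w = torch.stack([1.0 / (uncs[j] + 1e-6) for j in others], dim=0)
        w = w / w.sum(dim=0, keepdim=True)
        pseudo = sum(w[k] * preds[j] for k, j in enumerate(others)).detach()
        loss = loss + F.mse_loss(preds[i], pseudo)
    return loss / len(models)
```

In this sketch the pseudo label for each view is detached, so the consistency loss only updates the view being supervised; the inverse-variance weights simply serve as a stand-in for whatever reliability measure is used to favor confident views when fusing pseudo labels.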
ISSN: 1361-8415, 1361-8423
DOI: 10.1016/j.media.2020.101766