Uncertainty‐aware iterative learning for noisy‐labeled medical image segmentation


Bibliographic Details
Published in: IET Image Processing, Vol. 17, No. 13, pp. 3830-3840
Main Authors: Hao, Pengyi; Shi, Kangjian; Tian, Shuyuan; Wu, Fuli
Format: Journal Article
Language: English
Published: Wiley, 01.11.2023
Summary: Medical image segmentation from noisy labels is an important task, since obtaining high-quality annotations is extremely difficult and expensive. Many approaches have been proposed for this task; however, issues such as overfitting to noisy annotations, the limited learning of boundary features, and the neglect of corrupted local pixels remain unsolved. Therefore, a novel approach named uncertainty-aware iterative learning (UaIL) is proposed for medical image segmentation with noisy labels. UaIL iteratively and jointly trains two deep networks on the original images and their augmented versions through a joint loss function comprising a softened label loss, a hard label loss, and a consistency loss, which encourages UaIL to produce segmentations that are robust to perturbations in arbitrary semantic space. The uncertainty of the labels is estimated from the predictions during iterative learning, and the original labels are then refined, which improves the learning of boundary features in segmentation. To avoid overfitting, a stopping strategy based on the Dice coefficient is designed for the iterative learning. Experiments on two public datasets verify the effectiveness of UaIL under different levels of annotation noise. In particular, when the labels contain severe noise, the Dice score achieved by UaIL is 1.43% to 15.03% higher than that of the competing approaches on the two public datasets. UaIL is further verified on a private dataset, demonstrating its applicability to real-world settings with noisy labels.
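The summary mentions a stopping strategy based on the Dice coefficient during iterative learning, but gives no formula or rule. As an illustration only, the sketch below shows the standard Dice coefficient for binary masks and a hypothetical patience-based stopping check; the function names, `patience`, and `min_delta` parameters are assumptions, not the paper's actual strategy.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def should_stop(dice_history, patience=3, min_delta=1e-3):
    """Hypothetical stopping rule (a guess at the paper's strategy): stop
    iterating once the Dice score has not improved over the earlier best
    by at least min_delta for `patience` consecutive iterations."""
    if len(dice_history) <= patience:
        return False
    best_earlier = max(dice_history[:-patience])
    recent = dice_history[-patience:]
    return all(d < best_earlier + min_delta for d in recent)
```

For example, `should_stop([0.5, 0.6, 0.7, 0.7, 0.7, 0.7], patience=3)` returns `True` because the last three scores show no improvement over the earlier best of 0.7, which is the point at which continued iteration risks fitting the label noise.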
ISSN: 1751-9659, 1751-9667
DOI: 10.1049/ipr2.12900