MedUHIP: Towards Human-In-the-Loop Medical Segmentation
Format | Journal Article |
---|---|
Language | English |
Published | 02.08.2024 |
Summary: Although segmenting natural images has shown impressive performance, these techniques cannot be directly applied to medical image segmentation. Medical image segmentation is particularly complicated by inherent uncertainties. For instance, the ambiguous boundaries of tissues can lead to diverse but plausible annotations from different clinicians. These uncertainties cause significant discrepancies in clinical interpretations and impact subsequent medical interventions. Therefore, achieving quantitative segmentations from uncertain medical images becomes crucial in clinical practice. To address this, we propose a novel approach that integrates an **uncertainty-aware model** with **human-in-the-loop interaction**. The uncertainty-aware model proposes several plausible segmentations to address the uncertainties inherent in medical images, while the human-in-the-loop interaction iteratively modifies the segmentation under clinician supervision. This collaborative model ensures that segmentation is not solely dependent on automated techniques but is also refined through clinician expertise. As a result, our approach represents a significant advancement in the field that enhances the safety of medical image segmentation. It not only offers a comprehensive solution for producing quantitative segmentations from inherently uncertain medical images, but also establishes a synergistic balance between algorithmic precision and clinician knowledge. We evaluated our method on several publicly available multi-clinician annotated datasets: REFUGE2, LIDC-IDRI, and QUBIQ. Our method showcases superior segmentation capabilities, outperforming a wide range of deterministic and uncertainty-aware models. We also demonstrate that our model produces significantly better results with fewer interactions than previous interactive models. We will release the code to foster further research in this area.
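
The abstract describes two components: an uncertainty-aware model that proposes several plausible segmentations, and a human-in-the-loop step in which clinician edits iteratively refine the result. The paper's actual architecture is not given here, so the following is only a minimal Python sketch of how such a loop could look under assumed interfaces; `sample_plausible_masks`, `apply_clinician_edits`, and the click-based feedback format are hypothetical stand-ins, not the authors' API.

```python
import numpy as np

def sample_plausible_masks(image: np.ndarray, n_samples: int = 4) -> np.ndarray:
    """Hypothetical stand-in for an uncertainty-aware model: returns
    n_samples binary mask proposals by thresholding perturbed scores."""
    h, w = image.shape
    logits = image[None] + 0.1 * np.random.randn(n_samples, h, w)  # perturbed proposals
    return (logits > 0.5).astype(np.uint8)

def apply_clinician_edits(mask: np.ndarray, edits: list[tuple[int, int, int]]) -> np.ndarray:
    """Apply sparse clinician clicks given as (row, col, label), label in {0, 1}."""
    refined = mask.copy()
    for r, c, label in edits:
        refined[r, c] = label
    return refined

def human_in_the_loop_segmentation(image, get_clinician_edits, max_rounds: int = 3):
    # Step 1: sample several plausible segmentations and start from their consensus.
    proposals = sample_plausible_masks(image)
    current = (proposals.mean(axis=0) > 0.5).astype(np.uint8)
    # Step 2: alternate between clinician feedback and refinement.
    for _ in range(max_rounds):
        edits = get_clinician_edits(image, current)  # e.g. clicks on mis-segmented pixels
        if not edits:                                # empty feedback = clinician accepts
            break
        current = apply_clinician_edits(current, edits)
    return current

# Toy usage: a synthetic "image" and a clinician who corrects one pixel, then accepts.
image = np.random.rand(8, 8)
feedback = iter([[(0, 0, 1)], []])
final_mask = human_in_the_loop_segmentation(image, lambda img, m: next(feedback))
print(final_mask.shape)  # (8, 8)
```

In a real system the consensus-plus-clicks logic would be replaced by the model's own conditioning mechanism; the sketch only illustrates how proposal sampling and clinician feedback could alternate until the clinician accepts the segmentation.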
DOI: 10.48550/arxiv.2408.01620