Minimally interactive segmentation of soft-tissue tumors on CT and MRI using deep learning

Bibliographic Details
Published in: European Radiology
Main Authors: Spaanderman, Douwe J; Starmans, Martijn P A; van Erp, Gonnie C M; Hanff, David F; Sluijter, Judith H; Schut, Anne-Rose W; van Leenders, Geert J L H; Verhoef, Cornelis; Grünhagen, Dirk J; Niessen, Wiro J; Visser, Jacob J; Klein, Stefan
Format: Journal Article
Language: English
Published: Germany, 19.11.2024

Summary: Segmentations are crucial in medical imaging for morphological, volumetric, and radiomics biomarkers. Manual segmentation is accurate but not feasible in the clinical workflow, while automatic segmentation generally performs sub-par.

Objective: To develop a minimally interactive deep learning-based segmentation method for soft-tissue tumors (STTs) on CT and MRI.

Methods: The interactive method requires the user to click six points near the tumor's extreme boundaries in the image. These six points are transformed into a distance map and serve, together with the image, as input for a convolutional neural network. A multi-center public dataset with 514 patients and nine STT phenotypes in seven anatomical locations, with CT or T1-weighted MRI, was used for training and internal validation. For external validation, another public dataset was employed, which included five unseen STT phenotypes in extremities on CT, T1-weighted MRI, and T2-weighted fat-saturated (FS) MRI.

Results: Internal validation resulted in a Dice similarity coefficient (DSC) of 0.85 ± 0.11 (mean ± standard deviation) for CT and 0.84 ± 0.12 for T1-weighted MRI. External validation resulted in DSCs of 0.81 ± 0.08 for CT, 0.84 ± 0.09 for T1-weighted MRI, and 0.88 ± 0.08 for T2-weighted FS MRI. Volumetric measurements showed consistent replication with low error internally (volume: 1 ± 28 mm³, r = 0.99; diameter: −6 ± 14 mm, r = 0.90) and externally (volume: −7 ± 23 mm³, r = 0.96; diameter: −3 ± 6 mm, r = 0.99). Interactive segmentation time was considerably shorter (CT: 364 s, T1-weighted MRI: 258 s) than manual segmentation (CT: 1639 s, T1-weighted MRI: 1895 s).

Conclusion: The minimally interactive segmentation method effectively segments STT phenotypes on CT and MRI, with robust generalization to unseen phenotypes and imaging modalities.

Key Points:
Question: Can this deep learning-based method segment soft-tissue tumors faster than can be done manually and more accurately than other automatic methods?
Findings: The minimally interactive segmentation method achieved accurate segmentation results in internal and external validation, and generalized well across soft-tissue tumor phenotypes and imaging modalities.
Clinical relevance: This minimally interactive deep learning-based segmentation method could reduce the burden of manual segmentation, facilitate the integration of imaging-based biomarkers (e.g., radiomics) into clinical practice, and provide a fast, semi-automatic solution for volume and diameter measurements (e.g., RECIST).
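The click-encoding step described in the abstract (six boundary clicks turned into a distance map, stacked with the image as network input) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact distance transform, normalization, and channel layout are assumptions, and `dice` simply shows how the reported DSC metric is computed.

```python
import numpy as np

def clicks_to_distance_map(shape, clicks):
    """Euclidean distance map from user click points (pure-NumPy sketch)."""
    # voxel coordinate grid, shape (*shape, ndim)
    coords = np.stack(
        np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1
    )
    pts = np.asarray(clicks, dtype=float)  # (n_clicks, ndim)
    # distance from every voxel to each click, keeping the nearest click
    dists = np.linalg.norm(coords[..., None, :] - pts, axis=-1).min(axis=-1)
    # scale to [0, 1] so the channel roughly matches the image range (assumption)
    return dists / dists.max()

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# six clicks approximating a tumor's extreme boundary points in a 2D slice
clicks = [(10, 32), (54, 32), (32, 8), (32, 56), (20, 20), (44, 44)]
dmap = clicks_to_distance_map((64, 64), clicks)

# stack the distance map with the (normalized) image as a two-channel CNN input
image = np.zeros((64, 64), dtype=np.float32)
net_input = np.stack([image, dmap.astype(np.float32)])
```

In 3D the same code applies with three-element shapes and click coordinates; the paper's method works on volumetric CT/MRI, so the distance map would be computed per volume rather than per slice.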
ISSN: 1432-1084
DOI: 10.1007/s00330-024-11167-8