A modified U-Net convolutional neural network for segmenting periprostatic adipose tissue based on contour feature learning

Bibliographic Details
Published in: Heliyon, Vol. 10, No. 3, p. e25030
Main Authors: Wang, Gang; Hu, Jinyue; Zhang, Yu; Xiao, Zhaolin; Huang, Mengxing; He, Zhanping; Chen, Jing; Bai, Zhiming
Format: Journal Article
Language: English
Published: England: Elsevier Ltd, 15.02.2024
Summary: This study trains a U-shaped fully convolutional neural network (U-Net) model based on peripheral contour measures to achieve rapid, accurate, automated identification and segmentation of periprostatic adipose tissue (PPAT). To date, no study has used deep learning methods to discriminate and segment PPAT. This paper proposes a novel, modified U-shaped convolutional neural network that learns contour control points from a small dataset of MRI T2W images of PPAT combined with their gradient images, a feature-learning strategy that reduces the feature ambiguity caused by differences in PPAT contours across patients. It adopts supervised learning on the labeled dataset and, by combining the probability and spatial distribution of control points, proposes a weighted loss function to improve the network's convergence speed and detection performance. Based on high-precision detection of control points, a convex curve fit yields the final PPAT contour. The segmentation results were compared with those of a fully convolutional network (FCN), U-Net, and a semantic segmentation convolutional network (SegNet) on three evaluation metrics: Dice similarity coefficient (DSC), Hausdorff distance (HD), and intersection over union (IoU). Cropped images with a 270 × 270-pixel matrix achieved DSC, HD, and IoU values of 70.1%, 27 mm, and 56.1%, respectively; downscaled images with a 256 × 256-pixel matrix achieved 68.7%, 26.7 mm, and 54.1%. The U-Net based on peripheral contour characteristics predicted complete PPAT contours on T2W images at different levels, whereas FCN, U-Net, and SegNet could not. This contour-feature U-Net can therefore identify and segment PPAT effectively. Cropped 270 × 270-pixel images are better suited to the contour-feature U-Net than downscaled images, since reducing the resolution of the original image lowers the network's accuracy, and FCN and SegNet are not appropriate for identifying PPAT on T2-sequence MR images. Our method can segment PPAT automatically, rapidly, and accurately, laying a foundation for PPAT image analysis.
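The abstract describes the weighted loss only at a high level: per-pixel weights derived from the probability and spatial distribution of control points. A minimal PyTorch sketch of that idea follows; the function name, the heatmap encoding of control points, and the specific weighting scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a weighted loss for control-point heatmap detection.
# ASSUMPTION: names, the heatmap encoding, and the weighting scheme are
# illustrative; the paper's exact loss is not reproduced here.
import torch
import torch.nn.functional as F

def weighted_control_point_loss(pred_logits: torch.Tensor,
                                target_heatmap: torch.Tensor,
                                w_pos: float = 10.0) -> torch.Tensor:
    """pred_logits, target_heatmap: (B, 1, H, W); target_heatmap holds
    smoothed control-point probabilities in [0, 1]."""
    # Up-weight pixels near control points so the sparse control-point
    # locations are not swamped by background; pure background keeps weight 1.
    weights = 1.0 + (w_pos - 1.0) * target_heatmap
    return F.binary_cross_entropy_with_logits(
        pred_logits, target_heatmap, weight=weights, reduction="mean")

# Toy usage on a 270 x 270 crop, matching the paper's preferred input size.
logits = torch.randn(2, 1, 270, 270)
heatmap = torch.rand(2, 1, 270, 270)
print(weighted_control_point_loss(logits, heatmap))
```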
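Likewise, the post-processing and evaluation steps named in the abstract can be made concrete. The sketch below (function names are assumptions; a plain convex hull stands in for the paper's convex curve fit) rasterizes a convex contour from detected control points and scores the resulting mask with the three reported metrics: DSC, IoU, and a symmetric Hausdorff distance.

```python
# Minimal sketch: convex contour from control points, plus DSC / IoU / HD.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import ConvexHull
from scipy.spatial.distance import directed_hausdorff
from skimage.draw import polygon

def mask_from_control_points(points: np.ndarray, shape: tuple) -> np.ndarray:
    """Fit a convex hull to (row, col) control points and rasterize it."""
    verts = points[ConvexHull(points).vertices]      # ordered hull vertices
    rr, cc = polygon(verts[:, 0], verts[:, 1], shape)
    mask = np.zeros(shape, dtype=bool)
    mask[rr, cc] = True
    return mask

def dice(pred, gt):
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def iou(pred, gt):
    return np.logical_and(pred, gt).sum() / np.logical_or(pred, gt).sum()

def hausdorff(pred, gt):
    """Symmetric Hausdorff distance between mask boundaries, in pixels;
    multiply by the in-plane pixel spacing to report millimetres."""
    boundary = lambda m: np.argwhere(m & ~binary_erosion(m))
    p, g = boundary(pred), boundary(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

# Toy usage: five hypothetical control points on a 270 x 270 crop.
pts = np.array([[60, 80], [70, 190], [150, 220], [200, 120], [140, 60]], float)
pred = mask_from_control_points(pts, (270, 270))
print(dice(pred, pred), iou(pred, pred), hausdorff(pred, pred))
```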
These authors have contributed equally to this work.
ISSN: 2405-8440
DOI: 10.1016/j.heliyon.2024.e25030