Improved Medical Image Segmentation Model Based on 3D U-Net

Bibliographic Details
Published in: Journal of Donghua University (English Edition), Vol. 39, No. 4, pp. 311-316
Main Authors: LIN Wei, FAN Hong, HU Chenxi, YANG Yi, YU Suping, NI Lin
Format: Journal Article
Language: English
Published: College of Information Science and Technology, Donghua University, Shanghai 201620, China, 30.08.2022
Summary: TP317.4; With the widespread application of deep learning in computer vision, using medical imaging technology to assist doctors in diagnosis has great practical and research significance. To address the shortcomings of the traditional U-Net model in 3D spatial information extraction, model over-fitting, and the low degree of semantic information fusion, an improved medical image segmentation model is proposed to segment medical images more accurately. In this model, residual network (ResNet) connections are used to alleviate the over-fitting problem. To process and aggregate data at different scales, inception modules replace the traditional convolutional layers, and dilated convolution is used to enlarge the receptive field. A conditional random field (CRF) then refines the segmentation contours. Compared with the traditional 3D U-Net network, the segmentation accuracy on liver and tumor images increases by 2.89% and 7.66%, respectively. As part of the image processing pipeline, the method in this paper can not only be used for medical image segmentation but also lay the foundation for subsequent 3D image reconstruction.
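The abstract's claim that dilated convolution enlarges the receptive field can be illustrated with the standard receptive-field calculation for stacked stride-1 convolutions (a generic sketch for intuition, not code from the paper; layer sizes and dilation rates below are assumed examples):

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 convolutions.

    layers: list of (kernel_size, dilation) tuples, one per layer.
    Each layer grows the receptive field by (kernel_size - 1) * dilation.
    """
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf


# Three plain 3x3x3 convolutions (dilation 1 everywhere):
plain = receptive_field([(3, 1), (3, 1), (3, 1)])    # -> 7
# Same depth, with dilation rates 1, 2, 4 as in many dilated designs:
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])  # -> 15
```

With the same number of layers and parameters, the dilated stack covers a 15-voxel extent per axis instead of 7, which is why dilation is a cheap way to capture more 3D context without extra pooling.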
ISSN: 1672-5220
DOI: 10.19884/j.1672-5220.202202377