D-UNet: A Dimension-Fusion U Shape Network for Chronic Stroke Lesion Segmentation


Bibliographic Details
Published in: IEEE/ACM Transactions on Computational Biology and Bioinformatics, Vol. 18, No. 3, pp. 940-950
Main Authors: Zhou, Yongjin; Huang, Weijian; Dong, Pei; Xia, Yong; Wang, Shanshan
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.05.2021
ISSN: 1545-5963, 1557-9964
DOI: 10.1109/TCBB.2019.2939522

Summary: Assessing the location and extent of lesions caused by chronic stroke is critical for medical diagnosis, surgical planning, and prognosis. In recent years, with the rapid development of 2D and 3D convolutional neural networks (CNN), the encoder-decoder structure has shown great potential in the field of medical image segmentation. However, the 2D CNN ignores the 3D information of medical images, while the 3D CNN suffers from high computational resource demands. This paper proposes a new architecture called dimension-fusion-UNet (D-UNet), which combines 2D and 3D convolution innovatively in the encoding stage. The proposed architecture achieves a better segmentation performance than 2D networks, while requiring significantly less computation time in comparison to 3D networks. Furthermore, to alleviate the data imbalance issue between positive and negative samples for the network training, we propose a new loss function called Enhance Mixing Loss (EML). This function adds a weighted focal coefficient and combines two traditional loss functions. The proposed method has been tested on the ATLAS dataset and compared to three state-of-the-art methods. The results demonstrate that the proposed method achieves the best quality performance in terms of DSC = 0.5349 ± 0.2763 and precision = 0.6331 ± 0.295.
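The abstract describes the Enhance Mixing Loss only at a high level: a focal weighting coefficient combined with two traditional loss functions to counter the imbalance between lesion and background voxels. Below is a minimal illustrative sketch of that idea, assuming the two traditional terms are a focal-weighted binary cross-entropy and a soft Dice loss; the mixing weight `alpha`, the focal exponent `gamma`, and the exact combination are assumptions for illustration, not the paper's published formulation.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|), a standard overlap-based term."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_bce(pred, target, gamma=2.0, eps=1e-7):
    """Binary cross-entropy scaled by a focal factor (1 - p_t)^gamma, which
    down-weights easy, well-classified background voxels so the rare lesion
    voxels contribute more to the gradient."""
    pred = np.clip(pred, eps, 1.0 - eps)
    p_t = np.where(target == 1, pred, 1.0 - pred)  # probability of the true class
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

def enhance_mixing_loss(pred, target, alpha=0.5, gamma=2.0):
    """Illustrative mixing loss (hypothetical form): a weighted sum of the
    focal-weighted cross-entropy and the Dice term."""
    return alpha * focal_bce(pred, target, gamma) + (1.0 - alpha) * dice_loss(pred, target)
```

A confident, correct segmentation yields a small loss, while a confidently wrong one is penalized on both terms; the focal factor is what keeps abundant easy negatives from dominating the average.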