Deep learning–based image instance segmentation for moisture marks of shield tunnel lining

Bibliographic Details
Published in: Tunnelling and Underground Space Technology, Vol. 95, p. 103156
Main Authors: Zhao, Shuai; Zhang, Dong Ming; Huang, Hong Wei
Format: Journal Article
Language: English
Published: Oxford: Elsevier Ltd, 01.01.2020

Summary: This paper presents a method for image instance segmentation of moisture marks on shield tunnel lining using the mask region-based convolutional neural network (Mask R-CNN) algorithm. The authors' previously proposed fully convolutional network (FCN) framework and moisture-mark detection framework are combined into a unified Mask R-CNN framework. A total of 5031 images covering five scales were collected and annotated to train this deep-learning (DL) algorithm to identify moisture marks in images. Instance segmentation proceeds in three steps: feature extraction, generation of region proposals, and moisture-mark identification. A high-quality segmentation mask is generated for each moisture mark, and the moisture-mark area is obtained by counting the pixels with a value of 1 in the polygon generated during testing of the trained Mask R-CNN model. The proposed method is validated experimentally, and the results are compared with those of the authors' previous FCN method and two conventional methods: the region-growing algorithm (RGA) and the Otsu algorithm (OA). On 503 test images, the accuracy, F1 score, and intersection over union (IoU) of the proposed method are superior to those of the FCN, RGA, and OA. The inference time of the proposed method is considerably shorter than that of the FCN and RGA and slightly longer than that of the OA.
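The summary describes two concrete computations: the moisture-mark area is obtained by counting 1-valued pixels in the predicted binary mask, and segmentation quality is evaluated with intersection over union (IoU). A minimal sketch of both, assuming NumPy and toy binary masks standing in for the Mask R-CNN output and the ground-truth annotation (not the authors' implementation):

```python
import numpy as np

def mask_area(mask):
    """Area of a binary segmentation mask, in pixels (count of 1-valued pixels)."""
    return int(np.count_nonzero(mask))

def iou(pred, gt):
    """Intersection over union of two binary masks of the same shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 0.0

# Toy 4x4 masks: `pred` mimics a predicted moisture-mark mask, `gt` the annotation.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[0, 1, 1, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])

print(mask_area(pred))  # 4 pixels
print(iou(pred, gt))    # intersection 3, union 4 -> 0.75
```

Accuracy and F1 score over the 503 test images follow the same pattern, derived from the per-pixel true/false positive and negative counts between each predicted mask and its annotation.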
ISSN: 0886-7798, 1878-4364
DOI: 10.1016/j.tust.2019.103156