ToolNet: Holistically-Nested Real-Time Segmentation of Robotic Surgical Tools


Bibliographic Details
Published in: arXiv.org
Main Authors: Garcia-Peraza-Herrera, Luis C.; Li, Wenqi; Fidon, Lucas; Gruijthuijsen, Caspar; Devreker, Alain; Attilakos, George; Deprest, Jan; Vander Poorten, Emmanuel; Stoyanov, Danail; Vercauteren, Tom; Ourselin, Sebastien
Format: Paper, Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 04.07.2017

Summary: Real-time tool segmentation from endoscopic videos is an essential part of many computer-assisted robotic surgical systems and of critical importance in robotic surgical data science. We propose two novel deep learning architectures for automatic segmentation of non-rigid surgical instruments. Both methods take advantage of automated deep-learning-based multi-scale feature extraction while trying to maintain an accurate segmentation quality at all resolutions. The two proposed methods encode the multi-scale constraint inside the network architecture. The first proposed architecture enforces it by cascaded aggregation of predictions, and the second proposed network does it by means of a holistically-nested architecture where the loss at each scale is taken into account during optimization. As the proposed methods target real-time semantic labeling, both have a reduced number of parameters. We propose the use of parametric rectified linear units for semantic labeling in these small architectures to increase the regularization ability of the design and maintain the segmentation accuracy without overfitting the training sets. We compare the proposed architectures against state-of-the-art fully convolutional networks. We validate our methods using existing benchmark datasets, including ex vivo cases with phantom tissue and different robotic surgical instruments present in the scene. Our results show a statistically significant improvement in Dice Similarity Coefficient over previous instrument segmentation methods. We analyze our design choices and discuss the key drivers for improving accuracy.
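
To illustrate the holistically-nested idea described in the summary, the following is a minimal PyTorch sketch, not the authors' implementation: a small encoder with PReLU activations emits a side prediction at each scale, the training loss sums the binary cross-entropy of every scale plus a fused output, and a helper computes the Dice Similarity Coefficient used for evaluation. The class and function names (TinyHolisticallyNestedNet, holistically_nested_loss, dice_coefficient), the layer widths, and the averaging fusion are assumptions made for this example only.

```python
# Illustrative sketch only: toy holistically-nested segmentation network with
# per-scale supervision and PReLU activations (names and widths are assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHolisticallyNestedNet(nn.Module):
    """Toy multi-scale segmentation network with a side output at each scale."""
    def __init__(self, in_channels=3, widths=(16, 32, 64)):
        super().__init__()
        self.blocks = nn.ModuleList()
        self.side_heads = nn.ModuleList()
        prev = in_channels
        for w in widths:
            self.blocks.append(nn.Sequential(
                nn.Conv2d(prev, w, kernel_size=3, padding=1),
                nn.PReLU(w),          # parametric rectified linear unit
                nn.MaxPool2d(2),      # halve the resolution at each scale
            ))
            self.side_heads.append(nn.Conv2d(w, 1, kernel_size=1))  # tool/background logits
            prev = w

    def forward(self, x):
        side_logits = []
        full_size = x.shape[-2:]
        for block, head in zip(self.blocks, self.side_heads):
            x = block(x)
            # Upsample each side prediction back to the input resolution so the
            # loss can be applied at every scale against the same ground truth.
            side_logits.append(F.interpolate(head(x), size=full_size,
                                             mode="bilinear", align_corners=False))
        fused = torch.mean(torch.stack(side_logits), dim=0)  # simple fusion of the scales
        return side_logits, fused

def holistically_nested_loss(side_logits, fused, target):
    """Sum the segmentation loss over every scale plus the fused output."""
    loss = F.binary_cross_entropy_with_logits(fused, target)
    for logits in side_logits:
        loss = loss + F.binary_cross_entropy_with_logits(logits, target)
    return loss

def dice_coefficient(pred_mask, target_mask, eps=1e-6):
    """Dice Similarity Coefficient: 2|A n B| / (|A| + |B|)."""
    intersection = (pred_mask * target_mask).sum()
    return (2.0 * intersection + eps) / (pred_mask.sum() + target_mask.sum() + eps)

# Toy usage on a random image and binary mask.
net = TinyHolisticallyNestedNet()
image = torch.rand(1, 3, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
sides, fused = net(image)
loss = holistically_nested_loss(sides, fused, mask)
dsc = dice_coefficient((torch.sigmoid(fused) > 0.5).float(), mask)
```

In this sketch the per-scale losses play the role of the multi-scale constraint encoded in the network architecture; the actual papers' cascaded-aggregation and holistically-nested designs differ in how predictions are combined.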
ISSN: 2331-8422
DOI: 10.48550/arxiv.1706.08126