LapFormer: surgical tool detection in laparoscopic surgical video using transformer architecture

Bibliographic Details
Published in: Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, Vol. 9, No. 3, pp. 302-307
Main Author: Kondo, Satoshi
Format: Journal Article
Language: English; Japanese
Published: Taylor & Francis (Informa UK Limited), 04.05.2021
Summary: One of the most essential steps in surgical workflow analysis is recognition of surgical tool presence. We propose a method, called LapFormer, to detect the presence of surgical tools in laparoscopic surgery videos. The novelty of LapFormer is the use of a Transformer architecture, a feed-forward neural network architecture with an attention mechanism that is growing in popularity in natural language processing, to analyse inter-frame correlation in videos instead of using recurrent neural networks. To the best of our knowledge, no method using a Transformer architecture to analyse laparoscopic surgery videos has been proposed. We evaluate our method on the Cholec80 dataset, which contains 80 videos of cholecystectomy surgeries. We confirm that our proposed method outperforms conventional methods, namely single-frame analysis with convolutional neural networks and multi-frame analysis with recurrent neural networks, by 20.3 and 17.3 points in macro-F1 score, respectively. We also conduct an ablation study on how the hyper-parameters of the Transformer block in our proposed method affect detection performance.
ISSN: 2168-1163; 2168-1171
DOI: 10.1080/21681163.2020.1835550
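The core idea in the summary, replacing recurrent networks with self-attention so that each frame's representation is computed from its correlation with every other frame, can be illustrated with a minimal sketch. This is not the authors' LapFormer implementation: the feature dimension, the single attention head, the random weights, and the seven-class sigmoid head (Cholec80 annotates seven tool categories) are all illustrative assumptions, with CNN frame features stood in by random vectors.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (T, d) sequence of per-frame feature vectors.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # (T, T) matrix of scaled inter-frame correlations, one row per frame.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V  # each output frame mixes all frames

rng = np.random.default_rng(0)
T, d, n_tools = 8, 16, 7              # 8 frames, 16-dim features, 7 tool classes
X = rng.standard_normal((T, d))       # stand-in for CNN features of 8 frames
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
W_out = rng.standard_normal((d, n_tools)) * 0.1

H = self_attention(X, Wq, Wk, Wv)     # temporally contextualised frame features
probs = 1 / (1 + np.exp(-(H @ W_out)))  # per-frame tool-presence probabilities
print(probs.shape)  # prints (8, 7)
```

Unlike a recurrent network, which must propagate information step by step, the attention matrix here gives every frame direct access to every other frame in one operation; a full Transformer block would add multi-head attention, residual connections, layer normalisation, and a feed-forward sublayer on top of this.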