A deep learning model based on fusion images of chest radiography and X-ray sponge images supports human visual characteristics of retained surgical items detection


Bibliographic Details
Published in: International journal for computer assisted radiology and surgery, Vol. 18, no. 8, pp. 1459-1467
Main Authors: Kawakubo, Masateru; Waki, Hiroto; Shirasaka, Takashi; Kojima, Tsukasa; Mikayama, Ryoji; Hamasaki, Hiroshi; Akamine, Hiroshi; Kato, Toyoyuki; Baba, Shingo; Ushiro, Shin; Ishigami, Kousei
Format: Journal Article
Language: English
Published: Cham: Springer International Publishing (Springer Nature B.V.), 30.12.2022

Summary:
Purpose: Although deep learning software has been proposed that uses post-processed images created by fusing normal post-operative radiographs with X-ray images of surgical sponges, its retained surgical item detectability has not been sufficiently compared with human visual evaluation. In this study, we investigated the association of retained surgical item detectability between deep learning and human subjective evaluation.
Methods: A deep learning model was constructed from 2987 training images and 1298 validation images, obtained by post-processing the fusion of normal post-operative radiographs with X-ray images of surgical sponges. A further 800 test images were then used: 400 with and 400 without a surgical sponge. The detection characteristics of retained sponges were compared between the model and an observer with 10 years of clinical experience using receiver operating characteristic (ROC) analysis.
Results: The deep learning model and the observer yielded, respectively: probability cutoff values of 0.37 and 0.45; areas under the curve of 0.87 and 0.76; sensitivities of 85% and 61%; and specificities of 73% and 92%.
Conclusion: For the detection of surgical sponges, the deep learning model showed higher sensitivity, whereas the human observer showed higher specificity. These complementary characteristics indicate that a deep learning system could support the clinical workflow in operating rooms for the prevention of retained surgical items.
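
As a reading aid, the following is a minimal Python sketch (not the authors' implementation) of how the reported ROC comparison could be computed: given per-image probabilities from a model and confidence scores from an observer on an 800-image test set (400 positive, 400 negative), it derives AUC, sensitivity, and specificity at the cutoffs quoted above (0.37 for the model, 0.45 for the observer). The scikit-learn usage and the synthetic placeholder scores are assumptions for illustration only.

```python
# Minimal sketch of the ROC-based comparison described in the summary.
# The ground-truth labels mirror the test set (400 sponge / 400 no-sponge);
# the score arrays are synthetic stand-ins for model probabilities and
# observer confidence ratings, used here only so the example runs.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# 400 images with a retained sponge (label 1) and 400 without (label 0).
y_true = np.concatenate([np.ones(400, dtype=int), np.zeros(400, dtype=int)])

# Placeholder scores in [0, 1]; in practice these would come from model
# inference and from the observer's subjective reading session.
model_prob = np.clip(rng.normal(loc=0.25 + 0.5 * y_true, scale=0.2), 0.0, 1.0)
observer_score = np.clip(rng.normal(loc=0.30 + 0.4 * y_true, scale=0.2), 0.0, 1.0)

def summarize(y, scores, cutoff):
    """Return AUC plus sensitivity and specificity at a probability cutoff."""
    auc = roc_auc_score(y, scores)
    pred = (scores >= cutoff).astype(int)
    tp = np.sum((pred == 1) & (y == 1))
    fn = np.sum((pred == 0) & (y == 1))
    tn = np.sum((pred == 0) & (y == 0))
    fp = np.sum((pred == 1) & (y == 0))
    return auc, tp / (tp + fn), tn / (tn + fp)

# Cutoffs of 0.37 (model) and 0.45 (observer) are the values quoted above.
for name, scores, cutoff in [("model", model_prob, 0.37),
                             ("observer", observer_score, 0.45)]:
    auc, sens, spec = summarize(y_true, scores, cutoff)
    print(f"{name}: AUC={auc:.2f}, sensitivity={sens:.0%}, specificity={spec:.0%}")
```

With real inference outputs and reading-session scores in place of the synthetic arrays, the same few lines reproduce the type of per-reader comparison summarized in the abstract.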
ISSN: 1861-6410 (print); 1861-6429 (electronic)
DOI:10.1007/s11548-022-02816-8