Feasibility study of deep learning‐based markerless real‐time lung tumor tracking with orthogonal X‐ray projection images

Bibliographic Details
Published in: Journal of Applied Clinical Medical Physics, Vol. 24, No. 4, e13894
Main Authors: Zhou, Dejun; Nakamura, Mitsuhiro; Mukumoto, Nobutaka; Matsuo, Yukinori; Mizowaki, Takashi
Format: Journal Article
Language: English
Published: United States: John Wiley & Sons, Inc., 01.04.2023
ISSN: 1526-9914
DOI: 10.1002/acm2.13894

Summary:
Purpose: The feasibility of a deep learning-based markerless real-time tumor tracking (RTTT) method was retrospectively studied with orthogonal kV X-ray images and clinical tracking records acquired during lung cancer treatment.

Methods: Ten patients with lung cancer treated with marker-implanted RTTT were included. The prescription dose was 50 Gy in four fractions, delivered through seven to nine non-coplanar static beam ports, corresponding to 14–18 X-ray tube angles for an orthogonal X-ray imaging system rotating with the gantry. All patients underwent four-dimensional computed tomography with 10 respiratory phases. After data augmentation, 2250 digitally reconstructed radiograph (DRR) images with labeled gross tumor volume (GTV) contours were obtained for each X-ray tube angle of each patient. These images were used to train a patient- and X-ray tube angle-specific GTV contour prediction model. During testing, the model trained on DRR images predicted the GTV contour on X-ray projection images acquired during treatment. The predicted three-dimensional (3D) position of the GTV was calculated from the centroids of the predicted contours in the orthogonal images. The 3D positions of the GTV determined by the marker-implanted RTTT system during treatment were taken as the ground truth, and the 3D deviations between the predictions and the ground truth were calculated to evaluate the performance of the model.

Results: The median GTV volume and motion range were 7.42 (range, 1.18–25.74) cm3 and 22 (range, 11–28) mm, respectively. In total, 8993 3D position comparisons were included. The mean calculation time was 85 ms per image. The overall median 3D deviation was 2.27 (interquartile range, 1.66–2.95) mm, and the probability of a 3D deviation smaller than 5 mm was 93.6%.

Conclusions: The evaluation results and calculation efficiency show that the proposed deep learning-based markerless RTTT method may be feasible for patients with lung cancer.
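As an illustration of the position-reconstruction and evaluation steps summarized above, the sketch below shows one way to back-project contour centroids from two orthogonal projections to a 3D point and to compute deviation statistics against ground-truth positions. The axis-aligned geometry, function names, and coordinate conventions are simplifying assumptions made for illustration only; the authors' actual implementation uses the gantry-dependent geometry of the orthogonal kV imaging system and is not reproduced here.

    import numpy as np

    def contour_centroid(contour_xy):
        """Centroid of a predicted GTV contour given as an (N, 2) array of
        in-plane detector coordinates scaled to the isocenter plane (mm)."""
        return np.asarray(contour_xy).mean(axis=0)

    def reconstruct_3d_position(centroid_a, centroid_b):
        """Idealized back-projection of two orthogonal-view centroids to a 3D point.
        Assumes (hypothetically) that imager A views along +x and reports (y, z),
        while imager B views along +y and reports (x, z); the shared z estimate
        is averaged. A clinical system would use the actual projection geometry."""
        y_a, z_a = centroid_a
        x_b, z_b = centroid_b
        return np.array([x_b, y_a, 0.5 * (z_a + z_b)])

    def deviation_stats(predicted, ground_truth, tolerance_mm=5.0):
        """3D deviations between predicted and ground-truth positions (N x 3 arrays),
        summarized as median, interquartile range, and fraction within tolerance."""
        d = np.linalg.norm(np.asarray(predicted) - np.asarray(ground_truth), axis=1)
        q1, med, q3 = np.percentile(d, [25, 50, 75])
        return {"median_mm": med, "iqr_mm": (q1, q3),
                "within_tolerance": float(np.mean(d < tolerance_mm))}

Applied to the paired predicted and ground-truth positions from all tracked frames, a routine like deviation_stats would produce summary figures analogous to the 2.27 mm median deviation and 93.6% sub-5-mm fraction reported above.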