Probabilistic 4D predictive model from in-room surrogates using conditional generative networks for image-guided radiotherapy

Bibliographic Details
Published in Medical image analysis Vol. 74; p. 102250
Main Authors Romaguera, Liset Vázquez, Mezheritsky, Tal, Mansour, Rihab, Carrier, Jean-François, Kadoury, Samuel
Format Journal Article
Language English
Published Netherlands Elsevier B.V. 01.12.2021
ISSN 1361-8415, 1361-8423
DOI 10.1016/j.media.2021.102250

More Information
Summary:
•Free-breathing motion model to generate 3D + t volumes.
•Integration of anatomical information and a history of partial observations as predictive variables within a conditional generative model.
•Temporal predictive mechanism acting on low-dimensional features to forecast multiple future volumes in one shot.
•Inference requires only a pre-treatment volume and real-time 2D images of the treated organ.
•Model validation with multiple imaging modalities (MRI and US), in both healthy volunteers and patients.

Variability in organ shape and location induced by respiration constitutes one of the main challenges during dose delivery in radiotherapy. Providing up-to-date volumetric information during treatment can improve tumor tracking, thereby increasing treatment efficiency and reducing damage to healthy tissue. We propose a novel probabilistic model to address the problem of volumetric estimation with a scalable predictive horizon from image-based surrogates during radiotherapy treatments, thus enabling out-of-plane tracking of targets. The problem is formulated as a conditional learning task, where the predictive variables are the 2D surrogate images and a pre-operative static 3D volume. The model learns a distribution of realistic motion fields over a population dataset. Simultaneously, a seq2seq-inspired temporal mechanism acts on the surrogate images, yielding representations extrapolated in time. The phase-specific motion distributions are associated with the predicted temporal representations, allowing the recovery of dense organ deformations at multiple future time points. Owing to its generative nature, the model enables uncertainty estimation by sampling the latent space multiple times. Furthermore, it can be readily personalized to a new subject via fine-tuning and does not require inter-subject correspondences. The proposed model was evaluated on free-breathing 4D MRI and ultrasound datasets from 25 healthy volunteers, as well as on 11 cancer patients. A navigator-based data augmentation strategy was used during the slice reordering process to increase model robustness against inter-cycle variability. The patient data were used as a hold-out test set. Our approach yields volumetric predictions from image surrogates with mean errors of 1.67 ± 1.68 mm and 2.17 ± 0.82 mm on unseen cases from the patient MRI and US datasets, respectively. Moreover, model personalization yields a mean landmark error of 1.4 ± 1.1 mm against ground-truth annotations on the volunteer MRI dataset, a statistically significant improvement over the state of the art.
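The abstract describes a pipeline in which a seq2seq-style temporal module extrapolates low-dimensional surrogate features forward in time, and a conditional generative model, conditioned on the static pre-treatment volume and those features, decodes dense 3D motion fields, with repeated latent sampling providing uncertainty estimates. The PyTorch sketch below illustrates one plausible reading of that inference path. It assumes a conditional-VAE-style latent over motion fields and a GRU-based predictor; all module names, layer choices, tensor shapes, and the horizon/n_samples parameters are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical minimal sketch of the abstract's architecture (PyTorch).
# Not the authors' code: shapes and layers are illustrative assumptions.
import torch
import torch.nn as nn

class SurrogateEncoder(nn.Module):
    """Encodes each 2D surrogate image into a low-dimensional feature."""
    def __init__(self, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat))

    def forward(self, x):                        # x: (B, T, 1, H, W)
        B, T = x.shape[:2]
        f = self.net(x.flatten(0, 1))            # (B*T, feat)
        return f.view(B, T, -1)                  # (B, T, feat)

class TemporalPredictor(nn.Module):
    """Seq2seq-style GRU: consumes past surrogate features and emits
    extrapolated features for several future time steps in one shot."""
    def __init__(self, feat=64, horizon=3):
        super().__init__()
        self.gru = nn.GRU(feat, feat, batch_first=True)
        self.head = nn.Linear(feat, horizon * feat)
        self.horizon, self.feat = horizon, feat

    def forward(self, f):                        # f: (B, T, feat)
        _, h = self.gru(f)                       # h: (1, B, feat)
        return self.head(h[-1]).view(-1, self.horizon, self.feat)

class ConditionalMotionVAE(nn.Module):
    """Conditional latent model over 3D motion fields: z is drawn from a
    learned conditional prior given the static volume and a (predicted)
    surrogate feature; the decoder outputs a dense displacement field."""
    def __init__(self, feat=64, z=32, vol=(16, 32, 32)):
        super().__init__()
        self.vol = vol
        self.vol_enc = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.to_stats = nn.Linear(8 + feat, 2 * z)
        numel = vol[0] * vol[1] * vol[2]
        self.dec = nn.Linear(z + 8 + feat, 3 * numel)  # 3-channel field

    def forward(self, volume, feat, n_samples=1):
        cond = torch.cat([self.vol_enc(volume), feat], dim=-1)
        mu, logvar = self.to_stats(cond).chunk(2, dim=-1)
        fields = []
        for _ in range(n_samples):               # repeated latent sampling
            zc = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            out = self.dec(torch.cat([zc, cond], dim=-1))
            fields.append(out.view(-1, 3, *self.vol))
        return torch.stack(fields, dim=1), mu, logvar

# Usage: forecast motion for 3 future steps from a pre-treatment volume
# and a history of 4 surrogate slices, sampling the latent 10 times.
enc, pred, cvae = SurrogateEncoder(), TemporalPredictor(horizon=3), ConditionalMotionVAE()
vol = torch.randn(1, 1, 16, 32, 32)              # static 3D volume
surr = torch.randn(1, 4, 1, 64, 64)              # 2D surrogate history
future = pred(enc(surr))                         # (1, 3, 64)
fields, mu, logvar = cvae(vol, future[:, 0], n_samples=10)
print(fields.shape)                              # (1, 10, 3, 16, 32, 32)
```

Drawing z repeatedly from the learned conditional distribution yields a small ensemble of deformation fields whose per-voxel spread can serve as an uncertainty map, mirroring the latent-space sampling described in the abstract.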