Population-based 3D respiratory motion modelling from convolutional autoencoders for 2D ultrasound-guided radiotherapy

Bibliographic Details
Published in: Medical Image Analysis, Vol. 75, p. 102260
Main Authors: Mezheritsky, Tal; Romaguera, Liset Vázquez; Le, William; Kadoury, Samuel
Format: Journal Article
Language: English
Published: Netherlands: Elsevier B.V., 01.01.2022

Summary:

Highlights:
•Motion compensation model for free-breathing ultrasound-guided radiotherapy treatments.
•A novel real-time motion modelling framework composed of a rigid alignment module and a deep deformable model.
•Inference requires only two pre-treatment volumes and live 2D images representing the current state of the treated organ.
•Validation on a cohort of 20 healthy volunteers.

Radiotherapy is a widely used treatment modality for various types of cancer. A challenge for precise delivery of radiation to the treatment site is the management of internal motion caused by the patient's breathing, especially around abdominal organs such as the liver. Current image-guided radiation therapy (IGRT) solutions rely on ionising imaging modalities such as X-ray or CBCT, which do not allow real-time target tracking. Ultrasound (US) imaging, on the other hand, is relatively inexpensive, portable and non-ionising. Although 2D US can be acquired at a sufficient temporal frequency, it does not allow target tracking in multiple planes, while 3D US acquisitions are too slow for real-time use. In this work, a novel deep learning-based motion modelling framework is presented for ultrasound IGRT. Our solution combines an image similarity-based rigid alignment module with a deep deformable motion model. Leveraging the representational capabilities of convolutional autoencoders, the deformable motion model associates complex 3D deformations with 2D surrogate US images through a common learned low-dimensional representation. The model is trained on a variety of deformations and anatomies, which enables it to generate the 3D motion experienced by the liver of a previously unseen subject. During inference, the framework requires only two pre-treatment 3D volumes of the liver at extreme breathing phases and a live 2D surrogate image representing the current state of the organ. In this study, the presented model is evaluated on a 3D+t US data set of 20 volunteers based on image similarity as well as anatomical target tracking performance. We report results that surpass comparable methodologies in both metric categories, with a mean tracking error of 3.5±2.4 mm, demonstrating the potential of this technique for IGRT.
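The core idea in the abstract is a shared low-dimensional latent representation that couples a live 2D surrogate US frame to a dense 3D deformation of the liver, which is then applied to a pre-treatment reference volume. The sketch below is a minimal illustration of that idea in PyTorch, not the authors' published architecture: the module names (SurrogateEncoder2D, DeformationDecoder3D), layer sizes, image/volume shapes, and the assumption that displacements are expressed in normalized grid coordinates are all illustrative.

```python
# Minimal sketch (not the authors' code): a 2D surrogate encoder and a 3D
# deformation decoder coupled through a shared low-dimensional latent space.
# All shapes, layer sizes, and names are illustrative assumptions.
import torch
import torch.nn as nn

class SurrogateEncoder2D(nn.Module):
    """Encodes a live 2D US surrogate frame into a latent code."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.fc = nn.Linear(64 * 16 * 16, latent_dim)

    def forward(self, x):                  # x: (B, 1, 128, 128)
        return self.fc(self.conv(x).flatten(1))

class DeformationDecoder3D(nn.Module):
    """Decodes a latent code into a dense 3D displacement field (3 channels)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 8 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose3d(16, 3, 4, stride=2, padding=1),              # 32 -> 64
        )

    def forward(self, z):                  # z: (B, latent_dim)
        return self.deconv(self.fc(z).view(-1, 64, 8, 8, 8))  # (B, 3, 64, 64, 64)

# Inference, as described in the abstract: a pre-treatment 3D liver volume is
# warped by the displacement field predicted from the live 2D surrogate frame.
encoder, decoder = SurrogateEncoder2D(), DeformationDecoder3D()
surrogate = torch.randn(1, 1, 128, 128)     # live 2D US frame (placeholder data)
reference = torch.randn(1, 1, 64, 64, 64)   # pre-treatment 3D volume (placeholder)
flow = decoder(encoder(surrogate))          # predicted 3D displacement field

# Warp the reference volume with the predicted field via grid_sample
# (displacements assumed to be in normalized [-1, 1] grid coordinates).
d, h, w = reference.shape[2:]
zs, ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
    torch.linspace(-1, 1, w), indexing="ij")
base = torch.stack((xs, ys, zs), dim=-1).unsqueeze(0)   # (1, D, H, W, 3)
grid = base + flow.permute(0, 2, 3, 4, 1)               # add displacements
warped = torch.nn.functional.grid_sample(reference, grid, align_corners=True)
print(warped.shape)  # torch.Size([1, 1, 64, 64, 64])
```

In the setting the abstract describes, such an encoder/decoder pair would be trained jointly across many subjects so that the latent space generalises to unseen anatomy; at treatment time only the two pre-treatment extreme-phase volumes and the streaming 2D frames would be needed to drive the warp.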
ISSN: 1361-8415, 1361-8423
DOI: 10.1016/j.media.2021.102260