Dual-energy CT for automatic organs-at-risk segmentation in brain-tumor patients using a multi-atlas and deep-learning approach
Published in: Scientific Reports, Vol. 9, No. 1, p. 4126
Main Authors: , , , , , ,
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 11.03.2019
Summary: In radiotherapy, computed tomography (CT) datasets are mostly used for radiation treatment planning to achieve highly conformal tumor coverage while optimally sparing the healthy tissue surrounding the tumor, referred to as organs-at-risk (OARs). Based on CT and/or magnetic resonance images, OARs have to be delineated manually by clinicians, which is one of the most time-consuming tasks in the clinical workflow. Recent multi-atlas (MA) and deep-learning (DL) based methods aim to improve the clinical routine through automatic segmentation of OARs on a CT dataset. However, no studies have so far investigated the performance of these MA or DL methods on dual-energy CT (DECT) datasets, which have been shown to improve image quality compared with conventional 120 kVp single-energy CT. In this study, the performance of an in-house developed MA method and a DL method (a two-step three-dimensional U-Net) was evaluated quantitatively and qualitatively on various DECT-derived pseudo-monoenergetic CT datasets ranging from 40 keV to 170 keV. At lower energies, the MA method produced more accurate OAR segmentations. Overall, both the qualitative and the quantitative analyses showed that the DL approach often performed better than the MA method.
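The summary does not state which quantitative metric was used to compare the automatic and manual delineations; the Dice similarity coefficient (DSC) is the standard overlap measure for such segmentation comparisons, and a minimal sketch of it is given below under that assumption. The function name dice_coefficient, the binary NumPy masks, and the example organ are illustrative and not taken from the paper.

import numpy as np

def dice_coefficient(auto_mask: np.ndarray, reference_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2 * |A intersect B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (perfect overlap).
    """
    auto = auto_mask.astype(bool)
    ref = reference_mask.astype(bool)
    intersection = np.logical_and(auto, ref).sum()
    total = auto.sum() + ref.sum()
    if total == 0:
        # Both masks empty: treat as perfect agreement.
        return 1.0
    return 2.0 * intersection / total

# Illustrative usage: compare a hypothetical automatic OAR segmentation
# (e.g. from an MA or DL method run on a pseudo-monoenergetic dataset)
# against a manual reference delineation of the same CT volume.
auto_seg = np.zeros((64, 64, 64), dtype=bool)
manual_seg = np.zeros((64, 64, 64), dtype=bool)
auto_seg[20:40, 20:40, 20:40] = True
manual_seg[22:42, 22:42, 22:42] = True
print(f"DSC = {dice_coefficient(auto_seg, manual_seg):.3f}")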
ISSN: 2045-2322
DOI: 10.1038/s41598-019-40584-9