Three-dimensional convolutional neural networks for simultaneous dual-tracer PET imaging

Bibliographic Details
Published in: Physics in Medicine & Biology, Vol. 64, No. 18, p. 185016
Main Authors: Xu, Jinmin; Liu, Huafeng
Format: Journal Article
Language: English
Published: England: IOP Publishing, 19.09.2019

Summary: Dual-tracer positron emission tomography (PET) is a promising technique for measuring the distribution of two tracers in the body with a single scan, which can improve the accuracy of clinical diagnosis and can also serve as a research tool for scientists. Most current research on dual-tracer PET reconstruction is based on mixed images pre-reconstructed by conventional algorithms, which limits further improvement in reconstruction precision. In this study, we present a hybrid-loss-guided, deep-learning-based framework for dual-tracer PET imaging that works directly on sinogram data and achieves reconstruction by naturally unifying two functions: reconstruction of the mixed images and separation of the individual tracers. Working with volumetric dual-tracer images, we adopted a three-dimensional (3D) convolutional neural network (CNN) to learn full features, capturing spatial and temporal information simultaneously. In addition, an auxiliary loss layer was introduced to guide the reconstruction of the dual tracers. We used Monte Carlo simulations with data augmentation to generate sufficient datasets for training and testing. The results were analyzed in terms of bias and variance, both spatially (across different regions of interest) and temporally (across different frames), and the analysis verified the feasibility of the 3D CNN framework for dual-tracer reconstruction. We also compared the reconstruction results with those of a deep belief network (DBN), another deep-learning-based technique that separates dual-tracer images using time-activity curves (TACs); the comparison highlights the superior features and performance of the 3D CNN. Furthermore, we tested [11C]FMZ-[11C]DTBZ images at three total-count levels corresponding to different noise ratios. The results demonstrate that, within the range of total counts applied, our method recovers the respective tracer distributions at lower total counts with nearly the same accuracy as at higher total counts, which also indicates that the proposed 3D CNN framework is more robust to noise than the DBN.
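
The summary describes the method only at the abstract level; no implementation details (network depth, loss weights, or training schedule) are given in this record. As a rough, hypothetical illustration of the hybrid-loss idea, the Python/PyTorch sketch below pairs a small 3D CNN that outputs one volume per tracer with a loss combining per-tracer separation terms and an auxiliary term on the summed (mixed) image. The names DualTracer3DCNN and hybrid_loss, the layer widths, and the aux_weight value are all assumptions, and the auxiliary term is applied in image space here for simplicity, whereas the paper's framework starts from sinogram data.

    # Hypothetical sketch (not the authors' code): a minimal 3D CNN with a hybrid
    # loss combining per-tracer separation terms and an auxiliary mixed-image term.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DualTracer3DCNN(nn.Module):  # name and layer sizes are assumptions
        def __init__(self, in_ch=1, hidden=16):
            super().__init__()
            # 3D convolutions learn spatial and temporal (frame) features jointly.
            self.encoder = nn.Sequential(
                nn.Conv3d(in_ch, hidden, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            )
            # Two output channels: one predicted activity volume per tracer.
            self.head = nn.Conv3d(hidden, 2, kernel_size=3, padding=1)

        def forward(self, x):
            # x: (batch, 1, frames, H, W) dynamic dual-tracer input
            out = self.head(self.encoder(x))    # (batch, 2, frames, H, W)
            return out[:, 0:1], out[:, 1:2]     # one volume per tracer

    def hybrid_loss(pred1, pred2, gt1, gt2, aux_weight=0.5):
        # Separation terms drive recovery of each tracer; the auxiliary term on the
        # summed (mixed) image guides overall reconstruction. aux_weight is assumed.
        separation = F.mse_loss(pred1, gt1) + F.mse_loss(pred2, gt2)
        mixed = F.mse_loss(pred1 + pred2, gt1 + gt2)
        return separation + aux_weight * mixed

    # Toy usage with random tensors standing in for dynamic PET frames.
    model = DualTracer3DCNN()
    x = torch.randn(2, 1, 18, 64, 64)            # (batch, channel, frames, H, W)
    gt1, gt2 = torch.randn_like(x), torch.randn_like(x)
    p1, p2 = model(x)
    hybrid_loss(p1, p2, gt1, gt2).backward()

Treating the mixed-image term as an extra loss attached to the predictions is one plausible reading of the "auxiliary loss layer" mentioned in the summary; the paper's actual placement of that layer may differ.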
Bibliography: Institute of Physics and Engineering in Medicine
PMB-108473.R2
ISSN: 0031-9155
1361-6560
DOI: 10.1088/1361-6560/ab3103