Deep learning-based attenuation correction for brain PET with various radiotracers

Bibliographic Details
Published in: Annals of Nuclear Medicine, Vol. 35, No. 6, pp. 691–701
Main Authors: Hashimoto, Fumio; Ito, Masanori; Ote, Kibo; Isobe, Takashi; Okada, Hiroyuki; Ouchi, Yasuomi
Format: Journal Article
Language: English
Published: Singapore: Springer Singapore (Springer Nature B.V.), 01.06.2021

Summary: Objectives: Attenuation correction (AC) is crucial for ensuring the quantitative accuracy of positron emission tomography (PET) imaging. However, obtaining accurate μ-maps from brain-dedicated PET scanners without an AC acquisition mechanism is challenging. To overcome this problem, we developed a deep learning-based PET AC (deep AC) framework that synthesizes transmission computed tomography (TCT) images from non-AC (NAC) PET images using a convolutional neural network (CNN) trained on a large, multi-radiotracer brain PET dataset. Methods: The proposed framework comprises three steps: (1) NAC PET image generation, (2) synthetic TCT generation using the CNN, and (3) PET image reconstruction. To avoid overfitting, we trained the CNN on a mixed dataset of six radiotracers: [18F]FDG, [18F]BCPP-EF, [11C]raclopride, [11C]PIB, [11C]DPA-713, and [11C]PBB3. We used 1261 brain NAC PET and TCT images (1091 for training and 70 for testing). [11C]Methionine subjects were excluded from the training dataset but included in the testing dataset. Results: The synthetic TCT images obtained using the CNN trained on the mixed dataset of six radiotracers were of higher quality than those obtained using CNNs trained on split datasets generated from each radiotracer. In the [18F]FDG study, the mean relative PET biases of the emission-segmented AC (ESAC) and deep AC were 8.46 ± 5.24 and −5.69 ± 4.97, respectively. The deep AC PET and TCT AC PET images exhibited excellent correlation for all seven radiotracers (R² = 0.912–0.982). Conclusion: These results indicate that our deep AC framework yields quantitatively superior PET images when the CNN is trained on the mixed multi-tracer dataset rather than on split, tracer-specific datasets.
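The three-step pipeline described in the Methods can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the trained CNN is replaced by a hypothetical placeholder (`synthesize_tct`, a simple threshold to a water-like μ value), and attenuation correction is approximated voxel-wise with an assumed path length rather than along lines of response through a reconstruction algorithm.

```python
import numpy as np

def synthesize_tct(nac_pet: np.ndarray) -> np.ndarray:
    """Placeholder for step (2), synthetic TCT (mu-map) generation.

    A real implementation would run the trained CNN on the NAC PET volume;
    here we simply assign a water-like linear attenuation coefficient
    (~0.0096 /mm at 511 keV) to voxels above a crude activity threshold.
    """
    mu_water = 0.0096  # assumed mu of water at 511 keV, per mm
    return np.where(nac_pet > 0.1 * nac_pet.max(), mu_water, 0.0)

def apply_ac(nac_pet: np.ndarray, mu_map: np.ndarray,
             path_mm: float = 100.0) -> np.ndarray:
    """Placeholder for step (3): crude voxel-wise attenuation correction.

    Divides by an attenuation factor exp(-mu * path) with an assumed
    mean photon path length, instead of full sinogram-based reconstruction.
    """
    acf = np.exp(-mu_map * path_mm)          # attenuation factor per voxel
    return nac_pet / np.maximum(acf, 1e-6)   # corrected activity estimate

# Tiny synthetic "brain" volume standing in for step (1), NAC PET generation
pet = np.zeros((8, 8, 8))
pet[2:6, 2:6, 2:6] = 1.0

mu = synthesize_tct(pet)      # step (2): NAC PET -> synthetic mu-map
corrected = apply_ac(pet, mu) # step (3): attenuation-corrected image
```

With these assumed constants, voxels inside the object are boosted by exp(0.96) ≈ 2.6× while background voxels stay at zero, mimicking how AC restores activity suppressed by photon attenuation.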
ISSN: 0914-7187, 1864-6433
DOI: 10.1007/s12149-021-01611-w