Model-Based Deep Learning PET Image Reconstruction Using Forward-Backward Splitting Expectation-Maximization
Published in | IEEE Transactions on Radiation and Plasma Medical Sciences, Vol. 5, No. 1, pp. 54-64 |
Main Authors | Mehranian, Abolfazl; Reader, Andrew J. |
Format | Journal Article |
Language | English |
Published | United States: IEEE; The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2021 |
Subjects | |
Abstract | We propose a forward-backward splitting algorithm to integrate deep learning into maximum-a-posteriori (MAP) positron emission tomography (PET) image reconstruction. The MAP reconstruction is split into regularization, expectation-maximization (EM), and a weighted fusion. For regularization, the use of either a Bowsher prior (using Markov random fields) or a residual learning unit (using convolutional neural networks) was considered. For the latter, our proposed forward-backward splitting EM (FBSEM), accelerated with ordered subsets (OS), was unrolled into a recurrent neural network in which network parameters (including regularization strength) are shared across all states and learned during PET reconstruction. Our network was trained and evaluated using PET-only (FBSEM-p) and PET-MR (FBSEM-pm) datasets for low-dose simulations and short-duration in-vivo brain imaging. It was compared to OSEM, Bowsher MAPEM, and a post-reconstruction U-Net denoising trained on the same PET-only (Unet-p) or PET-MR (Unet-pm) datasets. For simulations, FBSEM-p(m) and Unet-p(m) nets achieved a comparable performance, on average, 14.4% and 13.4% normalized root-mean-square error (NRMSE), respectively, and both outperformed the OSEM and MAPEM methods (with 20.7% and 17.7% NRMSE, respectively). For in-vivo datasets, the FBSEM-p(m), Unet-p(m), MAPEM, and OSEM methods achieved average root-sum-of-squared errors of 3.9%, 5.7%, 5.9%, and 7.8% in different brain regions, respectively. In conclusion, the studied U-Net denoising method achieved a comparable performance to a representative implementation of the FBSEM net. |
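The abstract describes splitting the MAP objective into an EM data-fidelity update, a regularization step, and a weighted fusion, with the scheme unrolled into a network. The sketch below is not the authors' implementation: the system model `A`, the placeholder `regularize` function, and the penalty weight `beta` are illustrative assumptions, and the per-voxel fused update shown is the standard closed form for an EM surrogate combined with a quadratic penalty, which may differ in detail from the paper's fusion step.

```python
import numpy as np

def em_update(x, y, A, eps=1e-9):
    """One MLEM data-fidelity update: x * A^T(y / (A x)) / (A^T 1)."""
    sens = A.T @ np.ones(y.shape[0])       # sensitivity image A^T 1
    ratio = y / np.maximum(A @ x, eps)     # measured / modelled projections
    return x * (A.T @ ratio) / np.maximum(sens, eps)

def fused_update(x_em, x_reg, sens, beta):
    """Per-voxel positive root of beta*x^2 + (sens - beta*x_reg)*x - sens*x_em = 0,
    i.e. the closed-form maximizer of the EM surrogate minus the quadratic
    penalty (beta/2)*(x - x_reg)^2 (an assumed, generic fusion formula)."""
    b = sens - beta * x_reg
    return (np.sqrt(b**2 + 4.0 * beta * sens * x_em) - b) / (2.0 * beta)

def fbsem_block(x, y, A, regularize, beta):
    """One unrolled forward-backward block: regularize, EM update, then fuse."""
    x_reg = regularize(x)                  # e.g. x + cnn(x) in the residual-learning variant
    x_em = em_update(x, y, A)
    sens = A.T @ np.ones(y.shape[0])
    return fused_update(x_em, x_reg, sens, beta)

# Toy usage with a random system model, purely to show the control flow.
rng = np.random.default_rng(0)
A = rng.random((200, 64))                  # 200 detector bins x 64 voxels
x_true = rng.random(64) + 0.1
y = rng.poisson(A @ x_true).astype(float)  # Poisson projection data
regularize = lambda x: x                   # identity stand-in for a trained CNN or Bowsher step
x = np.ones(64)
for _ in range(10):                        # ten unrolled blocks
    x = fbsem_block(x, y, A, regularize, beta=0.05)
```

In the paper's learned variant, `regularize` would be a residual CNN whose weights (and the regularization strength) are trained end-to-end across the unrolled, OS-accelerated blocks; in the Bowsher variant it would be an MR-guided smoothing prior.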
AbstractList | We propose a forward-backward splitting algorithm to integrate deep learning into maximum-a-posteriori (MAP) positron emission tomography (PET) image reconstruction. The MAP reconstruction is split into regularization, expectation-maximization (EM), and a weighted fusion. For regularization, the use of either a Bowsher prior (using Markov random fields) or a residual learning unit (using convolutional neural networks) was considered. For the latter, our proposed forward-backward splitting EM (FBSEM), accelerated with ordered subsets (OS), was unrolled into a recurrent neural network in which network parameters (including regularization strength) are shared across all states and learned during PET reconstruction. Our network was trained and evaluated using PET-only (FBSEM-p) and PET-MR (FBSEM-pm) datasets for low-dose simulations and short-duration in-vivo brain imaging. It was compared to OSEM, Bowsher MAPEM, and a post-reconstruction U-Net denoising trained on the same PET-only (Unet-p) or PET-MR (Unet-pm) datasets. For simulations, FBSEM-p(m) and Unet-p(m) nets achieved a comparable performance, on average, 14.4% and 13.4% normalized root-mean-square error (NRMSE), respectively, and both outperformed the OSEM and MAPEM methods (with 20.7% and 17.7% NRMSE, respectively). For in-vivo datasets, the FBSEM-p(m), Unet-p(m), MAPEM, and OSEM methods achieved average root-sum-of-squared errors of 3.9%, 5.7%, 5.9%, and 7.8% in different brain regions, respectively. In conclusion, the studied U-Net denoising method achieved a comparable performance to a representative implementation of the FBSEM net. |
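The simulation results above are reported as percentage NRMSE. As a small illustration (not taken from the paper), one common convention normalizes the root-mean-square error by the mean of the reference image; the normalization actually used in the paper is not stated in this record, so treat the choice below as an assumption.

```python
import numpy as np

def nrmse_percent(recon, reference, mask=None):
    """RMSE normalized by the mean of the reference (one common convention),
    optionally restricted to a region-of-interest mask."""
    if mask is not None:
        recon, reference = recon[mask], reference[mask]
    rmse = np.sqrt(np.mean((recon - reference) ** 2))
    return 100.0 * rmse / np.mean(reference)

# Purely illustrative example values.
rng = np.random.default_rng(1)
reference = rng.random((64, 64)) + 0.5
recon = reference + 0.05 * rng.standard_normal((64, 64))
print(f"NRMSE = {nrmse_percent(recon, reference):.1f}%")
```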
Author | Reader, Andrew J.; Mehranian, Abolfazl |
Author_xml | – sequence: 1 givenname: Abolfazl orcidid: 0000-0003-4584-4453 surname: Mehranian fullname: Mehranian, Abolfazl email: abolfazl.mehranian@kcl.ac.uk organization: Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, U.K – sequence: 2 givenname: Andrew J. orcidid: 0000-0002-2726-3383 surname: Reader fullname: Reader, Andrew J. organization: Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, U.K |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/34056150 (View this record in MEDLINE/PubMed) |
BookMark | eNp9Uctu1DAUtVARLaU_ABKKxIZNBr-TbJBKmUKlGVG107Xl2HcGl8QOdgKFryfpTEfQBStf6zzusc9zdOCDB4ReEjwjBFfvVleXy-sZxRTPGMac4_IJOqJcVnnBMDvYz4QcopOUbjHGpChpxcUzdMg4FpIIfISaZbDQ5B90Apt9BOiyBejond9kl_NVdtHqDWRXYIJPfRxM74LPbtIEn4f4U0c7Ss23aciuu8b1_QTN7zowvZ7I-VLfudb9vr-8QE_XuklwsjuP0c35fHX2OV98-XRxdrrIDRe4z5nVZWmYFIUgltE1WKh1SSlIXWBuLdNgNYCgUEtSCyMpMYRrI6FeF2A0O0bvt77dULdgDfg-6kZ10bU6_lJBO_Uv4t1XtQk_VCEJLkU1GrzdGcTwfYDUq9YlA02jPYQhKSqYmFrgE_XNI-ptGKIfn6coL3gpZInxyHr9d6J9lIciRkK5JZgYUoqwVsZtf3AM6BpFsJoWqvva1VS72tU-Sukj6YP7f0WvtiIHAHtBRSjjTLA_Faq66Q |
CODEN | ITRPFI |
CitedBy_id | crossref_primary_10_1109_TMI_2022_3176002 crossref_primary_10_1088_1361_6560_ace49c crossref_primary_10_1109_TRPMS_2020_3014786 crossref_primary_10_1177_14727978251319398 crossref_primary_10_1053_j_semnuclmed_2021_10_005 crossref_primary_10_1002_jmri_29294 crossref_primary_10_1007_s40766_024_00050_3 crossref_primary_10_1109_TRPMS_2022_3204643 crossref_primary_10_1088_1361_6560_ad40f6 crossref_primary_10_1016_j_cpet_2021_06_004 crossref_primary_10_1186_s12880_024_01417_y crossref_primary_10_1109_TRPMS_2023_3243735 crossref_primary_10_1109_TCI_2021_3118944 crossref_primary_10_1186_s13550_021_00788_5 crossref_primary_10_1007_s00259_021_05478_x crossref_primary_10_1007_s11042_023_15916_7 crossref_primary_10_1088_1361_6560_ac5bfb crossref_primary_10_1007_s00259_022_05891_w crossref_primary_10_1007_s12149_021_01697_2 crossref_primary_10_1109_TRPMS_2021_3101947 crossref_primary_10_3390_s23135783 crossref_primary_10_1109_TMI_2023_3273029 crossref_primary_10_1109_TRPMS_2023_3283786 crossref_primary_10_1016_j_sigpro_2023_109165 crossref_primary_10_1146_annurev_bioeng_082420_020343 crossref_primary_10_6009_jjrt_2024_2365 crossref_primary_10_1055_a_2198_0358 crossref_primary_10_3389_fradi_2024_1466498 crossref_primary_10_1002_mp_17191 crossref_primary_10_1109_TII_2020_3025182 crossref_primary_10_3389_fnume_2023_1324562 crossref_primary_10_1109_TIP_2024_3418347 crossref_primary_10_1109_TRPMS_2022_3161569 crossref_primary_10_1007_s12194_022_00652_8 crossref_primary_10_1002_mp_15520 crossref_primary_10_1088_1361_6560_ad2882 crossref_primary_10_1109_TMI_2024_3356189 crossref_primary_10_1109_TMI_2021_3120913 crossref_primary_10_1109_TMI_2023_3239596 crossref_primary_10_1007_s12194_024_00780_3 crossref_primary_10_1007_s10278_023_00815_y crossref_primary_10_1109_TRPMS_2023_3349194 crossref_primary_10_1109_TMI_2022_3217543 crossref_primary_10_1007_s00259_025_07119_z crossref_primary_10_1109_OJSP_2023_3311354 crossref_primary_10_1007_s40336_022_00508_6 crossref_primary_10_1007_s00259_022_05746_4 crossref_primary_10_1088_1361_6560_acde3e crossref_primary_10_1088_1361_6560_abfa36 crossref_primary_10_1088_1361_6560_abfb17 crossref_primary_10_3389_fnume_2022_936091 crossref_primary_10_1186_s40658_024_00660_0 crossref_primary_10_6009_jjrt_2024_2386 crossref_primary_10_2967_jnumed_121_262303 crossref_primary_10_1088_2057_1976_acf66c crossref_primary_10_1259_bjr_20230292 |
Cites_doi | 10.1109/42.232263 10.1109/TMI.2012.2211378 10.1088/1361-6560/ab3242 10.1109/TMI.2018.2869871 10.1109/JPROC.2019.2936809 10.1109/TMI.2018.2888491 10.1109/TMI.2018.2865356 10.1109/CVPR.2016.90 10.1053/j.semnuclmed.2012.08.006 10.1007/s00259-019-04468-4 10.1088/1361-6560/aa7670 10.1038/nature25988 10.1109/TRPMS.2017.2771490 10.1109/TRPMS.2018.2877644 10.1214/aos/1021379863 10.1117/12.2513096 10.1109/NSSMIC.2004.1462760 10.1109/TMI.2018.2833635 10.1111/j.2517-6161.1990.tb01798.x 10.1007/978-1-4419-9569-8_10 10.1088/0031-9155/51/15/R01 10.1109/NSS/MIC42101.2019.9059998 10.1007/978-3-319-24574-4_28 10.1088/1361-6560/ab0dc0 10.1109/42.538946 10.1109/TRPMS.2018.2844559 10.1109/TMI.1987.4307826 10.1109/NSSMIC.2018.8824563 10.1016/j.media.2019.03.013 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021 |
Copyright_xml | – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021 |
DBID | 97E RIA RIE AAYXX CITATION NPM 7QO 8FD F28 FR3 K9. NAPCQ P64 7X8 5PM |
DOI | 10.1109/TRPMS.2020.3004408 |
DatabaseName | IEEE All-Society Periodicals Package (ASPP) 2005–Present IEEE All-Society Periodicals Package (ASPP) 1998–Present IEEE Electronic Library (IEL) CrossRef PubMed Biotechnology Research Abstracts Technology Research Database ANTE: Abstracts in New Technology & Engineering Engineering Research Database ProQuest Health & Medical Complete (Alumni) Nursing & Allied Health Premium Biotechnology and BioEngineering Abstracts MEDLINE - Academic PubMed Central (Full Participant titles) |
DatabaseTitle | CrossRef PubMed Nursing & Allied Health Premium Biotechnology Research Abstracts Technology Research Database ProQuest Health & Medical Complete (Alumni) Engineering Research Database ANTE: Abstracts in New Technology & Engineering Biotechnology and BioEngineering Abstracts MEDLINE - Academic |
DatabaseTitleList | PubMed MEDLINE - Academic Nursing & Allied Health Premium |
Database_xml | – sequence: 1 dbid: NPM name: PubMed url: https://proxy.k.utb.cz/login?url=http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed sourceTypes: Index Database – sequence: 2 dbid: RIE name: IEEE Electronic Library (IEL) url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/ sourceTypes: Publisher |
DeliveryMethod | fulltext_linktorsrc |
Discipline | Physics |
EISSN | 2469-7303 |
EndPage | 64 |
ExternalDocumentID | PMC7610859 34056150 10_1109_TRPMS_2020_3004408 9123435 |
Genre | orig-research Journal Article |
GrantInformation_xml | – fundername: Department of Health through the National Institute for Health Research (NIHR) comprehensive Biomedical Research Centre award to Guy’s & St. Thomas’ NHS Foundation Trust in partnership with King’s College London and King’s College Hospital NHS Foundation Trust funderid: 10.13039/100010872 – fundername: Engineering and Physical Sciences Research Council (EPSRC) grantid: EP/M020142/1 funderid: 10.13039/501100000266 – fundername: Wellcome EPSRC Centre for Medical Engineering at King’s College London grantid: WT 203148/Z/16/Z funderid: 10.13039/501100000266 – fundername: Wellcome Trust grantid: 203148 |
GroupedDBID | 0R~ 6IK 97E AAJGR AARMG AASAJ AAWTH ABAZT ABJNI ABQJQ ABVLG ACGFS AGQYO AHBIQ AKJIK ALMA_UNASSIGNED_HOLDINGS ATWAV BEFXN BFFAM BGNUA BKEBE BPEOZ EBS EJD IFIPE IPLJI JAVBF OCL RIA RIE AAYXX CITATION RIG NPM 7QO 8FD F28 FR3 K9. NAPCQ P64 7X8 5PM |
ID | FETCH-LOGICAL-c450t-3da88c365751d32fedeba822e6a704dd3aedaee52eb61b5c621c14ac6ebf7eca3 |
IEDL.DBID | RIE |
ISSN | 2469-7311 |
IngestDate | Thu Aug 21 18:17:30 EDT 2025 Fri Jul 11 09:03:59 EDT 2025 Mon Jun 30 17:52:33 EDT 2025 Mon Jul 21 06:00:17 EDT 2025 Thu Apr 24 23:04:05 EDT 2025 Tue Jul 01 03:04:15 EDT 2025 Wed Aug 27 02:32:36 EDT 2025 |
IsDoiOpenAccess | false |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 1 |
Keywords | image reconstruction; Deep learning (DL); MRI; positron emission tomography (PET) |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
LinkModel | DirectLink |
MergedId | FETCHMERGED-LOGICAL-c450t-3da88c365751d32fedeba822e6a704dd3aedaee52eb61b5c621c14ac6ebf7eca3 |
Notes | ObjectType-Article-1 SourceType-Scholarly Journals-1 ObjectType-Feature-2 content type line 14 content type line 23 |
ORCID | 0000-0003-4584-4453 0000-0002-2726-3383 |
OpenAccessLink | https://kclpure.kcl.ac.uk/ws/files/130912953/Author_Accepted_Final.pdf |
PMID | 34056150 |
PQID | 2474856800 |
PQPubID | 4437208 |
PageCount | 11 |
ParticipantIDs | pubmedcentral_primary_oai_pubmedcentral_nih_gov_7610859 ieee_primary_9123435 pubmed_primary_34056150 proquest_journals_2474856800 proquest_miscellaneous_2535110949 crossref_citationtrail_10_1109_TRPMS_2020_3004408 crossref_primary_10_1109_TRPMS_2020_3004408 |
ProviderPackageCode | CITATION AAYXX |
PublicationCentury | 2000 |
PublicationDate | 2021-01-01 |
PublicationDateYYYYMMDD | 2021-01-01 |
PublicationDate_xml | – month: 01 year: 2021 text: 2021-01-01 day: 01 |
PublicationDecade | 2020 |
PublicationPlace | United States |
PublicationPlace_xml | – name: United States – name: Piscataway |
PublicationTitle | IEEE Transactions on Radiation and Plasma Medical Sciences |
PublicationTitleAbbrev | TRPMS |
PublicationTitleAlternate | IEEE Trans Radiat Plasma Med Sci |
PublicationYear | 2021 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
Publisher_xml | – name: IEEE – name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
References | ref13 ref15 ref30 ref11 ref10 ref2 ref1 ref17 ref16 ronneberger (ref14) 2015 ref19 ref18 ref24 ref23 ref26 ref25 ref20 ref22 ref21 ref28 ref27 ref29 ref8 ref7 ref9 ref4 ref3 ref6 ref5 cheng (ref12) 2017 |
References_xml | – ident: ref20 doi: 10.1109/42.232263 – ident: ref18 doi: 10.1109/TMI.2012.2211378 – ident: ref30 doi: 10.1088/1361-6560/ab3242 – ident: ref13 doi: 10.1109/TMI.2018.2869871 – ident: ref7 doi: 10.1109/JPROC.2019.2936809 – ident: ref15 doi: 10.1109/TMI.2018.2888491 – start-page: 715 year: 2017 ident: ref12 article-title: Accelerated iterative image reconstruction using a deep learning based leapfrogging strategy publication-title: Proc Int Meet Fully Three-Dimensional Image Reconstruction Radiology Nucl Medicine – ident: ref22 doi: 10.1109/TMI.2018.2865356 – ident: ref23 doi: 10.1109/CVPR.2016.90 – ident: ref2 doi: 10.1053/j.semnuclmed.2012.08.006 – ident: ref16 doi: 10.1007/s00259-019-04468-4 – ident: ref3 doi: 10.1088/1361-6560/aa7670 – ident: ref8 doi: 10.1038/nature25988 – ident: ref5 doi: 10.1109/TRPMS.2017.2771490 – ident: ref10 doi: 10.1109/TRPMS.2018.2877644 – ident: ref28 doi: 10.1214/aos/1021379863 – ident: ref29 doi: 10.1117/12.2513096 – ident: ref24 doi: 10.1109/NSSMIC.2004.1462760 – ident: ref6 doi: 10.1109/TMI.2018.2833635 – ident: ref26 doi: 10.1111/j.2517-6161.1990.tb01798.x – ident: ref19 doi: 10.1007/978-1-4419-9569-8_10 – ident: ref1 doi: 10.1088/0031-9155/51/15/R01 – ident: ref25 doi: 10.1109/NSS/MIC42101.2019.9059998 – start-page: 234 year: 2015 ident: ref14 article-title: U-Net: Convolutional networks for biomedical image segmentation publication-title: Medical Image Computing and Computer-Assisted Intervention MICCAI 2015 doi: 10.1007/978-3-319-24574-4_28 – ident: ref11 doi: 10.1088/1361-6560/ab0dc0 – ident: ref27 doi: 10.1109/42.538946 – ident: ref4 doi: 10.1109/TRPMS.2018.2844559 – ident: ref21 doi: 10.1109/TMI.1987.4307826 – ident: ref17 doi: 10.1109/NSSMIC.2018.8824563 – ident: ref9 doi: 10.1016/j.media.2019.03.013 |
SSID | ssj0001782945 |
Score | 2.4871073 |
Snippet | We propose a forward-backward splitting algorithm to integrate deep learning into maximum-a-posteriori (MAP) positron emission tomography (PET) image... |
SourceID | pubmedcentral proquest pubmed crossref ieee |
SourceType | Open Access Repository Aggregation Database Index Database Enrichment Source Publisher |
StartPage | 54 |
SubjectTerms | Algorithms; Brain; Brain modeling; Computer simulation; Datasets; Deep learning; Deep learning (DL); Fields (mathematics); Image processing; Image reconstruction; Machine learning; Maximization; Medical imaging; MRI; Neural networks; Neuroimaging; Noise reduction; Optimization; Positron emission; Positron emission tomography; positron emission tomography (PET); Regularization; Splitting; Tomography; Training |
Title | Model-Based Deep Learning PET Image Reconstruction Using Forward-Backward Splitting Expectation-Maximization |
URI | https://ieeexplore.ieee.org/document/9123435 https://www.ncbi.nlm.nih.gov/pubmed/34056150 https://www.proquest.com/docview/2474856800 https://www.proquest.com/docview/2535110949 https://pubmed.ncbi.nlm.nih.gov/PMC7610859 |
Volume | 5 |
hasFullText | 1 |
inHoldings | 1 |
isFullTextHit | |
isPrint | |
link | http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwjV3dSxwxEB9UKPSlrbUf26qk4JvNuR9J9vJYq4cKiuAJvi35mG2l557UOyj-9U6ye2tPpPQtkGTJMklmfpnfzADsiKysVW0sd5m0XKBWXPvacVLVyusQk2Yj2-JMHV2Kkyt5tQJf-1gYRIzkMxyEZvTl-6mbh6eyPU3XLKn3VVgl4NbGaj2-p5Cq07EmcU6Ij5dFli1iZFK9NyYQf0FoMCeQ2lZZXtJDsbDKczbmU6rkX7pn9BpOF6tuKSe_BvOZHbj7Jwkd__e33sCrzghl39pdsw4r2LyFF5EM6u42YBIKpE34Pik4zw4Qb1mXhfUHOz8cs-MbuoNYwK2P2WdZpB6w0fR3ZOHuh1dBarALsnEjs5qFnMqu9fvzU_Pn-qYLAH0Hl6PD8fcj3lVl4E7IdMYLb4ZDV0SHjS_yGj1aQ2YGKlOmwvvCoDeIMkerMiudyjOXCeMU2rpEZ4r3sNZMG_wIrBTWORJH7a0RCtGKtB6aQnqyUXyqXALZQkaV61KWh8oZkypCl1RXUa5VkGvVyTWB3X7ObZuw45-jN4I8-pGdKBLYXGyFqjvTd1UuSjGUiizsBL703XQag4vFNDid0xgZHLMEmXUCH9qd03-7EAGtSZpdLu2pfkDI9L3c01z_jBm_SxWCRPSn51f7GV7mgWkTH4Y2YY1Ej1tkKs3sdjwjD8NLEso |
linkProvider | IEEE |
linkToHtml | http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwjV3daxQxEB9qRfTFr1pdrRrBN811P5Ls5dFqj6v2itAr9G3Jx6wWr3vF3oH41zvJ7m29UsS3QJIly0wyX7-ZAXgrsrJWtbHcZdJygVpx7WvHSVQrr0NOmo1oiyM1PhGfT-XpBrzvc2EQMYLPcBCGMZbv524ZXGW7mp5ZEu-34DbJfZm32VpXHhUSdjp2Jc7J5uNlkWWrLJlU707JjD8mezAnM7Xts7wmiWJrlZu0zOtgyb-kz-gBTFbnbkEnPwbLhR2439dKOv7vjz2E-50ayj60fPMINrB5DHciHNRdbsEstEib8T0ScZ59QrxgXR3Wb-zr_pQdnNMrxILlelV_lkXwARvNf0Yc7l7wC9KAHZOWG7HVLFRVdm3kn0_Mr7PzLgX0CZyM9qcfx7zry8CdkOmCF94Mh66IIRtf5DV6tIYUDVSmTIX3hUFvEGWOVmVWOpVnLhPGKbR1ic4U27DZzBt8BqwU1jkiR-2tEQrRirQemkJ60lJ8qlwC2YpGleuKlofeGbMqGi-priJdq0DXqqNrAu_6PRdtyY5_rt4K9OhXdqRIYGfFClV3qy-rXJRiKBXp2Am86afpPoYgi2lwvqQ1MoRmyWjWCTxtOaf_diGCvSZpd7nGU_2CUOt7faY5-x5rfpcqpIno5zef9jXcHU8nh9XhwdGXF3AvD7ib6CbagU1iA3xJitPCvor35Q80RxYU |
openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Model-Based+Deep+Learning+PET+Image+Reconstruction+Using+Forward-Backward+Splitting+Expectation-Maximization&rft.jtitle=IEEE+transactions+on+radiation+and+plasma+medical+sciences&rft.au=Mehranian%2C+Abolfazl&rft.au=Reader%2C+Andrew+J.&rft.date=2021-01-01&rft.pub=IEEE&rft.issn=2469-7311&rft.volume=5&rft.issue=1&rft.spage=54&rft.epage=64&rft_id=info:doi/10.1109%2FTRPMS.2020.3004408&rft.externalDocID=9123435 |
thumbnail_l | http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/lc.gif&issn=2469-7311&client=summon |
thumbnail_m | http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/mc.gif&issn=2469-7311&client=summon |
thumbnail_s | http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/sc.gif&issn=2469-7311&client=summon |