Evaluation of Deep Learning–Based Approaches to Segment Bowel Air Pockets and Generate Pelvic Attenuation Maps from CAIPIRINHA-Accelerated Dixon MR Images
Published in | Journal of Nuclear Medicine, Vol. 63, No. 3, pp. 468–475
Main Authors | Sari, Hasan; Reaungamornrat, Ja; Catalano, Onofrio A.; Vera-Olmos, Javier; Izquierdo-Garcia, David; Morales, Manuel A.; Torrado-Carvajal, Angel; Ng, Thomas S.C.; Malpica, Norberto; Kamen, Ali; Catana, Ciprian
Format | Journal Article |
Language | English |
Published | United States: Society of Nuclear Medicine, 01.03.2022
Abstract | Attenuation correction remains a challenge in pelvic PET/MRI. In addition to segmentation- and model-based approaches, deep learning methods have shown promise in synthesizing accurate pelvic attenuation maps (μ-maps). However, these methods often misclassify air pockets in the digestive tract, potentially introducing bias into the reconstructed PET images. The aims of this work were to develop deep learning–based methods to automatically segment air pockets and generate pseudo-CT images from CAIPIRINHA-accelerated MR Dixon images.
Methods: A convolutional neural network (CNN) was trained to segment air pockets using 3-dimensional CAIPIRINHA-accelerated MR Dixon datasets from 35 subjects and was evaluated against semiautomated segmentations. A separate CNN was trained to synthesize pseudo-CT μ-maps from the Dixon images. Its accuracy was evaluated by comparing the deep learning–, model-, and CT-based μ-maps using data from 30 of the subjects. Finally, the impact of the different μ-maps and air pocket segmentation methods on PET quantification was investigated.
Results: Air pockets segmented using the CNN agreed well with the semiautomated segmentations, with a mean Dice similarity coefficient of 0.75. The volumetric similarity score between the 2 segmentations was 0.85 ± 0.14. The mean absolute relative changes with respect to the CT-based μ-maps were 2.6% and 5.1% in the whole pelvis for the deep learning–based and model-based μ-maps, respectively. The average relative change between PET images reconstructed with deep learning–based and CT-based μ-maps was 2.6%.
Conclusion: We developed a deep learning–based method to automatically segment air pockets from CAIPIRINHA-accelerated Dixon images, with accuracy comparable to that of semiautomatic segmentations. The μ-maps synthesized with the deep learning–based method from CAIPIRINHA-accelerated Dixon images were more accurate than those generated with the model-based approach available on integrated PET/MRI scanners.
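The agreement metrics reported in the abstract (Dice similarity coefficient, volumetric similarity, and mean absolute relative change) have standard definitions that can be sketched as follows. This is an illustrative NumPy implementation of those textbook formulas, not the authors' evaluation code; the function names are our own.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def volumetric_similarity(a, b):
    """VS = 1 - |V_a - V_b| / (V_a + V_b); compares total volumes only,
    ignoring spatial overlap."""
    va = int(np.asarray(a, dtype=bool).sum())
    vb = int(np.asarray(b, dtype=bool).sum())
    if va + vb == 0:
        return 1.0
    return 1.0 - abs(va - vb) / (va + vb)

def mean_abs_relative_change(test_img, ref_img, mask):
    """Mean absolute relative change (%) of test_img vs. ref_img
    over voxels selected by a boolean mask (e.g., the whole pelvis)."""
    t = np.asarray(test_img, dtype=float)[mask]
    r = np.asarray(ref_img, dtype=float)[mask]
    return float(np.mean(np.abs(t - r) / np.abs(r)) * 100.0)
```

A Dice of 0.75 with volumetric similarity near 0.85, as reported, indicates the CNN and semiautomated masks capture similar air volumes with good but imperfect spatial overlap; VS can be high even when DSC is moderate, since it ignores location.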
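For context on the input data: the "Dixon" images referenced above come from Dixon water–fat separation, which combines in-phase and opposed-phase MR acquisitions. A minimal sketch of the idealized two-point case is below; real scanner reconstructions (including the CAIPIRINHA-accelerated sequence used here) additionally correct for B0 inhomogeneity and phase errors, which this toy version omits.

```python
import numpy as np

def two_point_dixon(in_phase, opposed_phase):
    """Idealized two-point Dixon water/fat separation.

    in_phase      : image where water and fat signals add (W + F)
    opposed_phase : image where they cancel (W - F)
    Returns (water, fat) images. Simplified: assumes perfect phase,
    no B0 field inhomogeneity correction.
    """
    in_phase = np.asarray(in_phase, dtype=float)
    opposed_phase = np.asarray(opposed_phase, dtype=float)
    water = 0.5 * (in_phase + opposed_phase)
    fat = 0.5 * (in_phase - opposed_phase)
    return water, fat
```

The resulting water and fat images are what segmentation- and model-based μ-map methods classify into tissue classes, and what the CNNs in this study take as input.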
Author | Hasan Sari; Ja Reaungamornrat; Onofrio A. Catalano; Javier Vera-Olmos; David Izquierdo-Garcia; Manuel A. Morales; Angel Torrado-Carvajal; Thomas S.C. Ng; Norberto Malpica; Ali Kamen; Ciprian Catana
ContentType | Journal Article |
Copyright | 2022 by the Society of Nuclear Medicine and Molecular Imaging. Copyright Society of Nuclear Medicine, Mar 1, 2022.
DOI | 10.2967/jnumed.120.261032 |
Discipline | Medicine |
EISSN | 2159-662X 1535-5667 |
EndPage | 475 |
ExternalDocumentID | PMC8978194 34301782 10_2967_jnumed_120_261032 |
Genre | Journal Article Research Support, N.I.H., Extramural |
GrantInformation_xml | – fundername: NCI NIH HHS grantid: R01 CA218187 |
ISSN | 0161-5505 1535-5667 |
IsDoiOpenAccess | false |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 3 |
Keywords | pseudo-CT; deep learning; PET/MRI; attenuation correction; PET quantification
Language | English |
License | 2022 by the Society of Nuclear Medicine and Molecular Imaging. |
Notes | Published online July 22, 2021.
OpenAccessLink | https://jnm.snmjournals.org/content/jnumed/early/2021/07/22/jnumed.120.261032.full.pdf |
PMID | 34301782 |
PageCount | 8 |
PublicationDate | 2022-03-01
PublicationDateYYYYMMDD | 2022-03-01 |
PublicationPlace | United States |
PublicationTitle | Journal of Nuclear Medicine |
PublicationTitleAlternate | J Nucl Med |
PublicationYear | 2022 |
Publisher | Society of Nuclear Medicine |
StartPage | 468 |
SubjectTerms | Air pockets; Artificial neural networks; Attenuation; Clinical Investigation; Computed tomography; Deep Learning; Gastrointestinal tract; Humans; Image processing; Image Processing, Computer-Assisted - methods; Image reconstruction; Image segmentation; Magnetic resonance imaging; Magnetic Resonance Imaging - methods; Medical imaging; Neural networks; Pelvis; Pelvis - diagnostic imaging; Positron emission; Positron emission tomography; Positron-Emission Tomography - methods; Synthesis; Tomography; Tomography, X-Ray Computed
Title | Evaluation of Deep Learning–Based Approaches to Segment Bowel Air Pockets and Generate Pelvic Attenuation Maps from CAIPIRINHA-Accelerated Dixon MR Images |
URI | https://www.ncbi.nlm.nih.gov/pubmed/34301782 https://www.proquest.com/docview/2645894896 https://www.proquest.com/docview/2555108830 https://pubmed.ncbi.nlm.nih.gov/PMC8978194 |
Volume | 63 |