Deep Learning-Based Image Segmentation on Multimodal Medical Imaging
Published in | IEEE transactions on radiation and plasma medical sciences Vol. 3; no. 2; pp. 162 - 169 |
Main Authors | Guo, Zhe; Li, Xiang; Huang, Heng; Guo, Ning; Li, Quanzheng |
Format | Journal Article |
Language | English |
Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.03.2019 |
Subjects | |
Online Access | Get full text |
Abstract | Multimodality medical imaging techniques have been increasingly applied in clinical practice and research studies. Corresponding multimodal image analysis and ensemble learning schemes have seen rapid growth and bring unique value to medical applications. Motivated by the recent success of applying deep learning methods to medical image processing, we first propose an algorithmic architecture for supervised multimodal image analysis with cross-modality fusion at the feature learning level, classifier level, and decision-making level. We then design and implement an image segmentation system based on deep convolutional neural networks to contour the lesions of soft tissue sarcomas using multimodal images, including those from magnetic resonance imaging, computed tomography, and positron emission tomography. The network trained with multimodal images shows superior performance compared to networks trained with single-modal images. For the task of tumor segmentation, performing image fusion within the network (i.e., fusing at convolutional or fully connected layers) is generally better than fusing images at the network output (i.e., voting). This paper provides empirical guidance for the design and application of multimodal image analysis. |
AbstractList | Multimodality medical imaging techniques have been increasingly applied in clinical practice and research studies. Corresponding multimodal image analysis and ensemble learning schemes have seen rapid growth and bring unique value to medical applications. Motivated by the recent success of applying deep learning methods to medical image processing, we first propose an algorithmic architecture for supervised multimodal image analysis with cross-modality fusion at the feature learning level, classifier level, and decision-making level. We then design and implement an image segmentation system based on deep convolutional neural networks to contour the lesions of soft tissue sarcomas using multimodal images, including those from magnetic resonance imaging, computed tomography, and positron emission tomography. The network trained with multimodal images shows superior performance compared to networks trained with single-modal images. For the task of tumor segmentation, performing image fusion within the network (i.e., fusing at convolutional or fully connected layers) is generally better than fusing images at the network output (i.e., voting). This paper provides empirical guidance for the design and application of multimodal image analysis. |
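The abstract contrasts fusing modalities inside the network (at convolutional or fully connected layers) with fusing at the network output (voting). A minimal NumPy sketch of the two extremes — feature-level concatenation versus decision-level probability averaging — using toy per-pixel linear classifiers; every array size, weight, and name below is an illustrative assumption, not the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-pixel feature maps for three modalities (MRI, CT, PET),
# each H x W x F. Sizes are illustrative, not from the paper.
H, W, F = 4, 4, 8
mri, ct, pet = (rng.normal(size=(H, W, F)) for _ in range(3))

def classifier(features, w):
    """Toy linear per-pixel classifier returning tumor probabilities."""
    logits = features @ w               # (H, W, F) @ (F,) -> (H, W)
    return 1.0 / (1.0 + np.exp(-logits))

# Feature-level fusion: concatenate modality features, then classify once.
fused = np.concatenate([mri, ct, pet], axis=-1)   # H x W x 3F
w_fused = rng.normal(size=3 * F)
prob_feature_fusion = classifier(fused, w_fused)

# Decision-level fusion ("voting"): classify each modality separately,
# then average the per-pixel probability maps.
w_single = rng.normal(size=F)
prob_decision_fusion = np.mean(
    [classifier(m, w_single) for m in (mri, ct, pet)], axis=0
)

assert prob_feature_fusion.shape == (H, W)
assert prob_decision_fusion.shape == (H, W)
```

In a CNN these would correspond to concatenating feature channels at an intermediate layer versus averaging the softmax outputs of per-modality networks; the paper's empirical finding is that the in-network variant generally segments tumors better.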
Author | Guo, Zhe Huang, Heng Li, Xiang Guo, Ning Li, Quanzheng |
Author_xml | – sequence: 1 givenname: Zhe orcidid: 0000-0003-4080-2449 surname: Guo fullname: Guo, Zhe email: guo_zion@bit.edu.cn organization: School of Information and Electronics, Beijing Institute of Technology, Beijing, China – sequence: 2 givenname: Xiang orcidid: 0000-0002-9851-6376 surname: Li fullname: Li, Xiang email: xli60@mgh.harvard.edu organization: Department of Radiology, Massachusetts General Hospital, Boston, MA, USA – sequence: 3 givenname: Heng surname: Huang fullname: Huang, Heng email: heng.huang@pitt.edu organization: Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, USA – sequence: 4 givenname: Ning surname: Guo fullname: Guo, Ning email: guo.ning@mgh.harvard.edu organization: Department of Radiology, Massachusetts General Hospital, Boston, MA, USA – sequence: 5 givenname: Quanzheng orcidid: 0000-0002-9651-5820 surname: Li fullname: Li, Quanzheng email: li.quanzheng@mgh.harvard.edu organization: Department of Radiology, Massachusetts General Hospital, Boston, MA, USA |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/34722958 (View this record in MEDLINE/PubMed) |
CODEN | ITRPFI |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019 |
Copyright_xml | – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019 |
DBID | 97E RIA RIE AAYXX CITATION NPM 7QO 8FD F28 FR3 K9. NAPCQ P64 7X8 5PM |
DOI | 10.1109/TRPMS.2018.2890359 |
DatabaseName | IEEE All-Society Periodicals Package (ASPP) 2005–Present IEEE All-Society Periodicals Package (ASPP) 1998–Present IEEE Electronic Library (IEL) - NZ CrossRef PubMed Biotechnology Research Abstracts Technology Research Database ANTE: Abstracts in New Technology & Engineering Engineering Research Database ProQuest Health & Medical Complete (Alumni) Nursing & Allied Health Premium Biotechnology and BioEngineering Abstracts MEDLINE - Academic PubMed Central (Full Participant titles) |
DatabaseTitle | CrossRef PubMed Nursing & Allied Health Premium Biotechnology Research Abstracts Technology Research Database ProQuest Health & Medical Complete (Alumni) Engineering Research Database ANTE: Abstracts in New Technology & Engineering Biotechnology and BioEngineering Abstracts MEDLINE - Academic |
DatabaseTitleList | Nursing & Allied Health Premium PubMed MEDLINE - Academic |
Database_xml | – sequence: 1 dbid: NPM name: PubMed url: https://proxy.k.utb.cz/login?url=http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed sourceTypes: Index Database – sequence: 2 dbid: RIE name: IEEE/IET Electronic Library url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/ sourceTypes: Publisher |
DeliveryMethod | fulltext_linktorsrc |
Discipline | Physics |
EISSN | 2469-7303 |
EndPage | 169 |
ExternalDocumentID | PMC8553020 34722958 10_1109_TRPMS_2018_2890359 8599078 |
Genre | orig-research Journal Article |
GrantInformation_xml | – fundername: China Scholarship Council funderid: 10.13039/501100004543 – fundername: National Institute of Biomedical Imaging and Bioengineering grantid: 1P41EB022544-01A1 funderid: 10.13039/100000070 – fundername: National Institutes of Health grantid: C06 CA059267 funderid: 10.13039/100000002 – fundername: National Institute on Aging grantid: 1RF1AG052653-01A1 funderid: 10.13039/100000049 – fundername: NCI NIH HHS grantid: C06 CA059267 – fundername: NIA NIH HHS grantid: RF1 AG052653 – fundername: NIBIB NIH HHS grantid: P41 EB022544 |
GroupedDBID | 0R~ 6IK 97E AAJGR AARMG AASAJ AAWTH ABAZT ABJNI ABQJQ ABVLG ACGFS AGQYO AHBIQ AKJIK ALMA_UNASSIGNED_HOLDINGS ATWAV BEFXN BFFAM BGNUA BKEBE BPEOZ EBS EJD IFIPE IPLJI JAVBF OCL RIA RIE AAYXX CITATION RIG NPM 7QO 8FD F28 FR3 K9. NAPCQ P64 7X8 5PM |
IEDL.DBID | RIE |
ISSN | 2469-7311 |
IngestDate | Thu Aug 21 18:08:12 EDT 2025 Fri Jul 11 02:30:31 EDT 2025 Mon Jun 30 17:58:49 EDT 2025 Thu Apr 03 06:53:25 EDT 2025 Thu Apr 24 23:06:29 EDT 2025 Tue Jul 01 03:04:14 EDT 2025 Wed Aug 27 02:29:09 EDT 2025 |
IsDoiOpenAccess | false |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 2 |
Keywords | Convolutional Neural Network Computed Tomography (CT) Magnetic Resonance Imaging (MRI) Multi-modal Image Positron Emission Tomography (PET) |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
LinkModel | DirectLink |
Notes | ObjectType-Article-1 SourceType-Scholarly Journals-1 ObjectType-Feature-2 Co-first authors; these authors contributed equally. |
ORCID | 0000-0003-4080-2449 0000-0002-9651-5820 0000-0002-9851-6376 |
OpenAccessLink | https://www.ncbi.nlm.nih.gov/pmc/articles/8553020 |
PMID | 34722958 |
PQID | 2296111628 |
PQPubID | 4437208 |
PageCount | 8 |
ParticipantIDs | pubmed_primary_34722958 crossref_primary_10_1109_TRPMS_2018_2890359 ieee_primary_8599078 pubmedcentral_primary_oai_pubmedcentral_nih_gov_8553020 proquest_journals_2296111628 crossref_citationtrail_10_1109_TRPMS_2018_2890359 proquest_miscellaneous_2591218346 |
ProviderPackageCode | CITATION AAYXX |
PublicationCentury | 2000 |
PublicationDate | 2019-03-01 |
PublicationDateYYYYMMDD | 2019-03-01 |
PublicationDate_xml | – month: 03 year: 2019 text: 2019-03-01 day: 01 |
PublicationDecade | 2010 |
PublicationPlace | United States |
PublicationPlace_xml | – name: United States – name: Piscataway |
PublicationTitle | IEEE transactions on radiation and plasma medical sciences |
PublicationTitleAbbrev | TRPMS |
PublicationTitleAlternate | IEEE Trans Radiat Plasma Med Sci |
PublicationYear | 2019 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
Publisher_xml | – name: IEEE – name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
SSID | ssj0001782945 |
Score | 2.5659351 |
SourceID | pubmedcentral proquest pubmed crossref ieee |
SourceType | Open Access Repository Aggregation Database Index Database Enrichment Source Publisher |
StartPage | 162 |
SubjectTerms | Artificial neural networks Biomedical imaging Clinical decision making Computed tomography Computed tomography (CT) Computer vision convolutional neural network (CNN) Decision making Deep learning Empirical analysis Feature extraction Image analysis Image processing Image segmentation Imaging techniques Machine learning Magnetic resonance imaging magnetic resonance imaging (MRI) Medical imaging Medical research multimodal image Neural networks Positron emission Positron emission tomography positron emission tomography (PET) Soft tissues Tomography Tumors |
Title | Deep Learning-Based Image Segmentation on Multimodal Medical Imaging |
URI | https://ieeexplore.ieee.org/document/8599078 https://www.ncbi.nlm.nih.gov/pubmed/34722958 https://www.proquest.com/docview/2296111628 https://www.proquest.com/docview/2591218346 https://pubmed.ncbi.nlm.nih.gov/PMC8553020 |
Volume | 3 |
hasFullText | 1 |
inHoldings | 1 |
isFullTextHit | |
isPrint | |