Detecting Anatomical Landmarks From Limited Medical Imaging Data Using Two-Stage Task-Oriented Deep Neural Networks
Published in | IEEE Transactions on Image Processing, Vol. 26, No. 10, pp. 4753–4764
Main Authors | Jun Zhang; Mingxia Liu; Dinggang Shen
Format | Journal Article
Language | English
Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.10.2017
Abstract | One of the major challenges in anatomical landmark detection based on deep neural networks is the limited availability of medical imaging data for network learning. To address this problem, we present a two-stage task-oriented deep learning (T2DL) method to detect large-scale anatomical landmarks simultaneously in real time, using limited training data. Our method consists of two deep convolutional neural networks (CNNs), each focusing on one specific task. To alleviate the problem of limited training data, in the first stage we propose a CNN-based regression model that uses millions of image patches as input, aiming to learn inherent associations between local image patches and target anatomical landmarks. To further model the correlations among image patches, in the second stage we develop another CNN model, which includes (a) a fully convolutional network (FCN) that shares the same architecture and network weights as the CNN used in the first stage and (b) several extra layers to jointly predict the coordinates of multiple anatomical landmarks. Importantly, our method can jointly detect large-scale (e.g., thousands of) landmarks in real time. We have conducted experiments detecting 1200 brain landmarks in 3D T1-weighted magnetic resonance (MR) images of 700 subjects, as well as 7 prostate landmarks in 3D computed tomography (CT) images of 73 subjects. The experimental results show the effectiveness of our method regarding both accuracy and efficiency in anatomical landmark detection.
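The first stage described in the abstract can be illustrated with a toy sketch. This is not the authors' code: each local patch regresses a displacement vector pointing from the patch center toward a landmark, and a candidate position is obtained by adding the predicted displacement to the patch center. Here, `predict_displacement` is a hypothetical stand-in for the trained stage-1 CNN, and a naive averaging of per-patch votes stands in for the paper's second-stage FCN aggregation.

```python
# Toy sketch of patch-based landmark regression (assumed simplification,
# not the T2DL implementation). `predict_displacement` stands in for the
# stage-1 CNN regressor; here it returns the exact displacement so the
# vote-aggregation logic can be demonstrated end to end.

def predict_displacement(patch_center, true_landmark):
    # Hypothetical stand-in for the trained CNN: displacement from the
    # patch center to the landmark, one 3D vector per patch.
    return tuple(t - c for t, c in zip(true_landmark, patch_center))

def aggregate_votes(patch_centers, true_landmark):
    """Average per-patch votes (patch center + predicted displacement)."""
    votes = [
        tuple(c + d for c, d in
              zip(center, predict_displacement(center, true_landmark)))
        for center in patch_centers
    ]
    n = len(votes)
    return tuple(sum(v[i] for v in votes) / n for i in range(3))

centers = [(10, 12, 8), (40, 35, 20), (25, 60, 44)]
landmark = (30.0, 30.0, 30.0)
print(aggregate_votes(centers, landmark))  # → (30.0, 30.0, 30.0)
```

In the paper itself, the second stage replaces this naive averaging with an FCN (sharing the stage-1 weights) plus extra layers that jointly predict all landmark coordinates, which is what makes real-time detection of thousands of landmarks feasible.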
Author | Zhang, Jun; Liu, Mingxia; Shen, Dinggang
Author details |
– Jun Zhang (ORCID 0000-0001-5579-7094), xdzhangjun@gmail.com
– Mingxia Liu, mingxia_liu@med.unc.edu
– Dinggang Shen, dgshen@med.unc.edu
All authors: Department of Radiology and the Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
CODEN | IIPRE4 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2017 |
DOI | 10.1109/TIP.2017.2721106 |
Discipline | Applied Sciences Engineering |
EISSN | 1941-0042 |
EndPage | 4764 |
ExternalDocumentID | PMC5729285 28678706 10_1109_TIP_2017_2721106 7961205 |
Genre | orig-research Journal Article |
GrantInformation |
– NIH (10.13039/100000002): EB006733, EB008374, EB009634, MH100217, AG041721, AG049371, AG042599
– NIBIB NIH HHS: R01 EB006733, R01 EB008374, R01 EB009634
– NIA NIH HHS: R01 AG041721, R01 AG042599, R01 AG049371, RF1 AG053867
– NCI NIH HHS: R01 CA140413
– NIMH NIH HHS: R01 MH100217
ISSN | 1057-7149 1941-0042 |
IsDoiOpenAccess | false |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 10 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information. |
Notes | These authors contributed equally to this study.
ORCID | 0000-0001-5579-7094 |
OpenAccessLink | https://www.ncbi.nlm.nih.gov/pmc/articles/5729285 |
PMID | 28678706 |
PageCount | 12 |
PublicationDate | 2017-10-01 |
PublicationPlace | United States |
PublicationTitle | IEEE transactions on image processing |
PublicationTitleAbbrev | TIP |
PublicationTitleAlternate | IEEE Trans Image Process |
PublicationYear | 2017 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 4753 |
SubjectTerms | Accuracy; Anatomical landmark detection; Artificial neural networks; Biological neural networks; Biomedical imaging; Brain; Computed tomography; Computer architecture; Deep convolutional neural networks; Landmarks; Limited medical imaging data; Machine learning; Magnetic resonance imaging; Medical imaging; Neural networks; Patches (structures); Prostate; Real time; Target recognition; Task-oriented; Testing; Three-dimensional displays; Training; Training data
Title | Detecting Anatomical Landmarks From Limited Medical Imaging Data Using Two-Stage Task-Oriented Deep Neural Networks |
URI | https://ieeexplore.ieee.org/document/7961205 https://www.ncbi.nlm.nih.gov/pubmed/28678706 https://www.proquest.com/docview/1920480761 https://www.proquest.com/docview/1916711986 https://pubmed.ncbi.nlm.nih.gov/PMC5729285 |
Volume | 26 |