Deep learning for abdominal adipose tissue segmentation with few labelled samples
Published in | International journal for computer assisted radiology and surgery Vol. 17; no. 3; pp. 579 - 587 |
Main Authors | Wang, Zheng; Hounye, Alphonse Houssou; Zhang, Jianglin; Hou, Muzhou; Qi, Min |
Format | Journal Article |
Language | English |
Published | Cham: Springer International Publishing (Springer Nature B.V), 01.03.2022 |
Abstract | Purpose
Fully automated abdominal adipose tissue segmentation from computed tomography (CT) scans plays an important role in biomedical diagnosis and prognosis. However, the traditional routine used in clinical practice to identify and segment subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) in the abdominal region is labour-intensive, expensive, time-consuming and prone to segmentation errors. To address this challenge, this paper introduces an effective global-anatomy-level convolutional neural network (ConvNet), termed EFNet, for automated segmentation of abdominal adipose tissue from CT scans; it accommodates multistage semantic segmentation and the highly similar intensity characteristics of the two classes (VAT and SAT) in the abdominal region.
Methods
EFNet consists of three pathways: (1) max unpooling, used to reduce computational cost; (2) concatenation, applied to recover the shape of the segmentation results; and (3) anatomy pyramid pooling, adopted to obtain fine-grained features. The usable anatomical information is encoded in the output of EFNet and allows control over the density of the fine-grained features.
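The first pathway can be illustrated with a minimal sketch, here 1-D and in plain Python for brevity; the paper gives no implementation details, so the function names and this simplification are hypothetical. The key idea is that max pooling records the argmax positions, and max unpooling reuses them to upsample with no learnable parameters, which is where the computational saving comes from.

```python
# Hypothetical sketch of max pooling with recorded indices, followed by max
# unpooling, as used in encoder-decoder segmentation networks. Not the
# authors' code; 1-D for illustration only.

def max_pool_with_indices(x, size=2):
    """Non-overlapping 1-D max pooling; also returns the position of each max."""
    pooled, indices = [], []
    for start in range(0, len(x), size):
        window = x[start:start + size]
        k = max(range(len(window)), key=lambda i: window[i])
        pooled.append(window[k])
        indices.append(start + k)
    return pooled, indices

def max_unpool(pooled, indices, length):
    """Upsample by placing each pooled value back at its recorded position,
    zeros elsewhere. No parameters are learned for this step."""
    out = [0.0] * length
    for value, idx in zip(pooled, indices):
        out[idx] = value
    return out

features = [1.0, 3.0, 2.0, 8.0, 5.0, 4.0]
pooled, idx = max_pool_with_indices(features)      # ([3.0, 8.0, 5.0], [1, 3, 4])
restored = max_unpool(pooled, idx, len(features))  # [0.0, 3.0, 0.0, 8.0, 5.0, 0.0]
```

In 2-D segmentation networks the same trick (e.g. PyTorch's `MaxUnpool2d`) replaces a learned transposed convolution in the decoder, trading some expressiveness for speed.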
Results
We formulated the learning process of EFNet in an end-to-end manner, in which the representation features are jointly learned through a mixed feature fusion layer. We extensively evaluated our model on different datasets and compared it with existing deep learning networks. EFNet outperformed other state-of-the-art models and demonstrated strong performance for abdominal adipose tissue segmentation.
Conclusion
EFNet is fast and delivers strong performance for fully automated segmentation of VAT and SAT in abdominal CT scans. The proposed method demonstrates strong capability for automated detection and segmentation of abdominal adipose tissue in clinical practice. |
Author | Wang, Zheng; Hounye, Alphonse Houssou; Zhang, Jianglin; Hou, Muzhou; Qi, Min |
Affiliations | – Wang, Zheng: School of Mathematics and Statistics, Central South University; Science and Engineering School, Hunan First Normal University – Hounye, Alphonse Houssou: School of Mathematics and Statistics, Central South University – Zhang, Jianglin: Department of Dermatology, The Second Clinical Medical College, Shenzhen People's Hospital, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology – Hou, Muzhou (houmuzhou@sina.com): School of Mathematics and Statistics, Central South University – Qi, Min (qimin05@csu.edu.cn): Department of Plastic Surgery, Xiangya Hospital, Central South University |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/34845590 (view this record in MEDLINE/PubMed) |
Copyright | CARS 2021 |
DOI | 10.1007/s11548-021-02533-8 |
EISSN | 1861-6429 |
EndPage | 587 |
GrantInformation | Scientific Research Fund of Hunan Provincial Education Department, grant 20C0402 |
ISSN | 1861-6410 (print); 1861-6429 (electronic) |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 3 |
Keywords | Subcutaneous adipose tissue (SAT); Visceral adipose tissue (VAT); Convolutional neural network (ConvNet); Computed tomography (CT) |
ORCID | 0000-0001-6658-2187 |
PMID | 34845590 |
PageCount | 9 |
PublicationDate | 2022-03-01 |
PublicationSubtitle | A journal for interdisciplinary research, development and applications of image guided diagnosis and therapy |
PublicationTitle | International journal for computer assisted radiology and surgery |
PublicationTitleAbbrev | Int J CARS |
PublicationTitleAlternate | Int J Comput Assist Radiol Surg |
SSID | ssj0045735 |
Snippet | Purpose: Fully automated abdominal adipose tissue segmentation from computed tomography (CT) scans plays an important role in biomedical diagnoses and prognoses. |
SourceID | proquest; pubmed; crossref; springer |
SourceType | Aggregation Database; Index Database; Enrichment Source; Publisher |
StartPage | 579 |
SubjectTerms | Abdomen; Abdominal Fat - diagnostic imaging; Adipose tissue; Anatomy; Artificial neural networks; Automation; Body fat; Computed tomography; Computer Imaging; Computer Science; Deep Learning; Health Informatics; Humans; Imaging; Medical imaging; Medicine; Medicine & Public Health; Neural Networks, Computer; Original Article; Pattern Recognition and Graphics; Radiology; Semantic segmentation; Subcutaneous Fat; Surgery; Tomography, X-Ray Computed; Vision |
Title | Deep learning for abdominal adipose tissue segmentation with few labelled samples |
URI | https://link.springer.com/article/10.1007/s11548-021-02533-8 https://www.ncbi.nlm.nih.gov/pubmed/34845590 https://www.proquest.com/docview/2632969421 https://www.proquest.com/docview/2604831005 |
Volume | 17 |
hasFullText | 1 |
inHoldings | 1 |
linkProvider | Springer Nature |
openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Deep+learning+for+abdominal+adipose+tissue+segmentation+with+few+labelled+samples&rft.jtitle=International+journal+for+computer+assisted+radiology+and+surgery&rft.au=Wang%2C+Zheng&rft.au=Hounye%2C+Alphonse+Houssou&rft.au=Zhang%2C+Jianglin&rft.au=Hou%2C+Muzhou&rft.date=2022-03-01&rft.issn=1861-6410&rft.eissn=1861-6429&rft.volume=17&rft.issue=3&rft.spage=579&rft.epage=587&rft_id=info:doi/10.1007%2Fs11548-021-02533-8&rft.externalDBID=n%2Fa&rft.externalDocID=10_1007_s11548_021_02533_8 |