Collaborative Learning for Annotation‐Efficient Volumetric MR Image Segmentation

Bibliographic Details
Published in: Journal of Magnetic Resonance Imaging, Vol. 60, No. 4, pp. 1604-1614
Main Authors: Osman, Yousuf Babiker M.; Li, Cheng; Huang, Weijian; Wang, Shanshan
Format: Journal Article
Language: English
Published: Hoboken, USA: John Wiley & Sons, Inc. (Wiley Subscription Services, Inc.), October 1, 2024
ISSN: 1053-1807
EISSN: 1522-2586
DOI: 10.1002/jmri.29194

Abstract
Background: Deep learning has shown great potential for accurate MR image segmentation when enough labeled data are provided for network optimization. However, manually annotating three-dimensional (3D) MR images is tedious and time-consuming, requiring experts with rich domain knowledge and experience.
Purpose: To build a deep learning method that exploits sparse annotations, namely a single two-dimensional slice label for each 3D training MR image.
Study Type: Retrospective.
Population: 3D MR images of 150 subjects from two publicly available datasets were included: 50 subjects (1377 image slices) for prostate segmentation and 100 subjects (8800 image slices) for left atrium segmentation. Five-fold cross-validation experiments were carried out on the first dataset; for the second dataset, 80 subjects were used for training and 20 for testing.
Field Strength/Sequence: 1.5 T and 3.0 T; axial T2-weighted and late gadolinium-enhanced, 3D respiratory-navigated, inversion-recovery-prepared gradient echo pulse sequences.
Assessment: A collaborative learning method integrating the strengths of semi-supervised and self-supervised learning schemes was developed. The method was trained using labeled central slices and unlabeled noncentral slices. Segmentation performance on the testing sets was reported quantitatively and qualitatively.
Statistical Tests: Quantitative evaluation metrics, including boundary intersection-over-union (B-IoU), Dice similarity coefficient, average symmetric surface distance, and relative absolute volume difference, were calculated. A paired t test was performed, and P < 0.05 was considered statistically significant.
Results: Compared to fully supervised training with only the labeled central slice, mean teacher, uncertainty-aware mean teacher, deep co-training, interpolation consistency training (ICT), and ambiguity-consensus mean teacher, the proposed method achieved a substantial improvement in segmentation accuracy, increasing the mean B-IoU significantly by more than 10.0% for prostate segmentation (proposed method: 70.3% ± 7.6% vs. ICT: 60.3% ± 11.2%) and by more than 6.0% for left atrium segmentation (proposed method: 66.1% ± 6.8% vs. ICT: 60.1% ± 7.1%).
Data Conclusion: A collaborative learning method trained using sparse annotations can segment the prostate and left atrium with high accuracy.
Level of Evidence: 0
Technical Efficacy: Stage 1
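
The Assessment above describes training that combines a supervised loss on the single labeled central slice with learning signals from the unlabeled noncentral slices. For illustration only, the following is a minimal mean-teacher-style sketch of the semi-supervised part of such a setup (the paper's self-supervised component is omitted); the network, the weight w_cons, and all function names are assumptions for this sketch, not the authors' implementation.

```python
# Minimal mean-teacher-style sketch: supervised loss on the one labeled
# central slice plus a consistency loss on the unlabeled slices.
# Illustrative only; not the authors' released implementation.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    # Teacher weights track an exponential moving average of the student.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1.0 - alpha)

def training_step(student, teacher, volume, central_label, central_idx,
                  w_cons=0.1):
    # volume: (S, 1, H, W) slices of one 3D image; only slice `central_idx`
    # has a ground-truth mask `central_label` of shape (H, W).
    logits = student(volume)  # (S, C, H, W)
    # Supervised loss on the single annotated central slice.
    sup = F.cross_entropy(logits[central_idx:central_idx + 1],
                          central_label.unsqueeze(0))
    # Consistency: the student should agree with the EMA teacher on the
    # remaining, unlabeled slices.
    with torch.no_grad():
        teacher_prob = torch.softmax(teacher(volume), dim=1)
    student_prob = torch.softmax(logits, dim=1)
    unlabeled = torch.ones(volume.shape[0], dtype=torch.bool)
    unlabeled[central_idx] = False
    cons = F.mse_loss(student_prob[unlabeled], teacher_prob[unlabeled])
    return sup + w_cons * cons
```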
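
Of the metrics listed under Statistical Tests, the Dice similarity coefficient and a boundary IoU can be sketched as follows for binary masks. B-IoU definitions vary (in particular the boundary-band width), so this follows one common formulation and is an assumption, not the paper's reference code.

```python
# Sketch of two of the reported metrics for binary masks (numpy arrays
# of 0/1). The boundary band is obtained by erosion; the paper's exact
# B-IoU definition may differ.
import numpy as np
from scipy.ndimage import binary_erosion

def dice(pred, gt, eps=1e-7):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def boundary_band(mask, width=2):
    # Voxels of the mask within `width` voxels of its surface.
    eroded = binary_erosion(mask, iterations=width)
    return np.logical_and(mask, np.logical_not(eroded))

def boundary_iou(pred, gt, width=2, eps=1e-7):
    bp = boundary_band(pred.astype(bool), width)
    bg = boundary_band(gt.astype(bool), width)
    inter = np.logical_and(bp, bg).sum()
    union = np.logical_or(bp, bg).sum()
    return inter / (union + eps)
```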
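
The paired t test reported above compares per-subject scores of two methods on the same test cases; with scipy it can be run as below. The score values are hypothetical placeholders, not results from the paper.

```python
# Paired t test on per-subject segmentation scores (e.g., Dice or B-IoU)
# for two methods evaluated on the same test subjects. Values below are
# hypothetical placeholders.
from scipy.stats import ttest_rel

scores_proposed = [0.71, 0.68, 0.74, 0.66, 0.72]
scores_baseline = [0.62, 0.60, 0.65, 0.58, 0.63]
t_stat, p_value = ttest_rel(scores_proposed, scores_baseline)
print(f"t = {t_stat:.3f}, P = {p_value:.4f}, significant: {p_value < 0.05}")
```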
Authors
– Osman, Yousuf Babiker M. (University of Chinese Academy of Sciences); ORCID: 0000-0001-9612-8866
– Li, Cheng (Chinese Academy of Sciences)
– Huang, Weijian (Peng Cheng Laboratory)
– Wang, Shanshan (Peng Cheng Laboratory); email: sophiasswang@hotmail.com
Copyright: 2024 International Society for Magnetic Resonance in Medicine
Funding
– National Natural Science Foundation of China (grants 62222118, U22A2040)
– Shenzhen Science and Technology Program (grants RCYX20210706092104034, JCYJ20220531100213029)
– Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (grant 2022B1212010011)
– Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province (grant 2023B1212060052)
Keywords: self-supervised learning; volumetric MR image segmentation; sparse annotations; pseudo labeling; semi-supervised learning
Notes: The first two authors contributed equally to this work.
PMID: 38156427
Subjects: Atrium; Collaborative learning; Datasets; Deep learning; Field strength; Gadolinium; Image annotation; Image processing; Image segmentation; Mean; Population studies; Prostate; pseudo labeling; self-supervised learning; semi-supervised learning; sparse annotations; Statistical analysis; Statistical tests; Supervised learning; Training; volumetric MR image segmentation
URI: https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fjmri.29194
https://www.ncbi.nlm.nih.gov/pubmed/38156427
https://www.proquest.com/docview/3128154732
https://www.proquest.com/docview/2908124246