DrasCLR: A Self-supervised Framework of Learning Disease-related and Anatomy-specific Representation for 3D Medical Images
Main Authors | Yu, Ke; Sun, Li; Chen, Junxiang; Reynolds, Max; Chaudhary, Tigmanshu; Batmanghelich, Kayhan |
---|---|
Format | Journal Article |
Language | English |
Published | 20.02.2023 |
Subjects | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
Online Access | https://arxiv.org/abs/2302.10390 |
Abstract | Large-scale volumetric medical images with annotation are rare, costly, and
time-prohibitive to acquire. Self-supervised learning (SSL) offers a promising
pre-training and feature-extraction solution for many downstream tasks, as it
uses only unlabeled data. Recently, SSL methods based on instance
discrimination have gained popularity in the medical imaging domain. However,
SSL pre-trained encoders may use many clues in the image to discriminate an
instance that are not necessarily disease-related. Moreover, pathological
patterns are often subtle and heterogeneous, requiring the desired method to
represent anatomy-specific features that are sensitive to abnormal changes in
different body parts. In this work, we present a novel SSL framework, named
DrasCLR, for 3D medical imaging to overcome these challenges. We propose two
domain-specific contrastive learning strategies: one aims to capture subtle
disease patterns inside a local anatomical region, and the other aims to
represent severe disease patterns that span larger regions. We formulate the
encoder using a conditional hyper-parameterized network, in which the
parameters are dependent on the anatomical location, to extract anatomically
sensitive features. Extensive experiments on large-scale computed tomography
(CT) datasets of lung images show that our method improves the performance of
many downstream prediction and segmentation tasks. The patient-level
representation improves the performance of the patient survival prediction
task. We show how our method can detect emphysema subtypes via dense
prediction. We demonstrate that fine-tuning the pre-trained model can
significantly reduce annotation effort without sacrificing emphysema detection
accuracy. Our ablation study highlights the importance of incorporating
anatomical context into the SSL framework. |
---|---|
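The two ingredients the abstract describes — contrastive instance discrimination and an encoder whose parameters depend on anatomical location — can be sketched in a toy form. This is a minimal illustration under stated assumptions, not the authors' implementation: `hyper_encoder`, the linear hypernetwork, and all dimensions are hypothetical stand-ins for the paper's 3D convolutional encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

D, K, L = 32, 8, 3  # patch dim, embedding dim, 3D anatomical-location dim

def hyper_encoder(patch, loc, W_hyper, b_hyper):
    # Toy "conditional hyper-parameterized" encoder: a hypernetwork maps the
    # anatomical location `loc` to the weights of a linear encoder, so the
    # extracted features are sensitive to where in the body the patch lies.
    w = np.tanh(W_hyper @ loc + b_hyper)  # generated weights, shape (K*D,)
    W = w.reshape(K, D)                   # location-specific encoder matrix
    z = W @ patch                         # embed the patch
    return z / np.linalg.norm(z)          # unit-normalize for cosine similarity

def info_nce(z_anchor, z_pos, z_negs, tau=0.1):
    # Standard InfoNCE loss: pull the positive pair together in embedding
    # space while pushing the negatives away.
    logits = np.concatenate([[z_anchor @ z_pos], z_negs @ z_anchor]) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

# Hypothetical hypernetwork parameters (learned jointly in practice).
W_hyper = 0.1 * rng.normal(size=(K * D, L))
b_hyper = 0.1 * rng.normal(size=K * D)

loc = np.array([0.2, 0.5, 0.8])                # normalized anatomical coordinate
anchor = rng.normal(size=D)                    # patch at `loc` in one scan
positive = anchor + 0.05 * rng.normal(size=D)  # nearby patch from the same region
negatives = rng.normal(size=(16, D))           # patches from other scans

z_a = hyper_encoder(anchor, loc, W_hyper, b_hyper)
z_p = hyper_encoder(positive, loc, W_hyper, b_hyper)
z_n = np.stack([hyper_encoder(n, loc, W_hyper, b_hyper) for n in negatives])

loss = info_nce(z_a, z_p, z_n)
```

In the paper's setting the linear hypernetwork would be replaced by a network generating convolution kernels, and the positive/negative pairs are drawn per the two region-level contrastive strategies; the loss structure above is the generic contrastive template.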
Author | Yu, Ke; Sun, Li; Chen, Junxiang; Reynolds, Max; Chaudhary, Tigmanshu; Batmanghelich, Kayhan |
Copyright | http://creativecommons.org/licenses/by/4.0 |
DOI | 10.48550/arxiv.2302.10390 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | false |
OpenAccessLink | https://arxiv.org/abs/2302.10390 |
SecondaryResourceType | preprint |
SourceID | arxiv |
SourceType | Open Access Repository |
SubjectTerms | Computer Science - Artificial Intelligence Computer Science - Computer Vision and Pattern Recognition Computer Science - Learning |
linkProvider | Cornell University |