Make Segment Anything Model Perfect on Shadow Detection

Bibliographic Details
Published in IEEE Transactions on Geoscience and Remote Sensing, Vol. 61, pp. 1-13
Main Authors Chen, Xiao-Diao; Wu, Wen; Yang, Wenya; Qin, Hongshuai; Wu, Xiantao; Mao, Xiaoyang
Format Journal Article
Language English
Published New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023
Online Access Get full text

Abstract Compared to models pretrained on ImageNet, the segment anything model (SAM) has been trained on a massive segmentation corpus, excelling in both generalization ability and boundary localization. However, these strengths are still insufficient to enhance shadow detection without additional training, and it raises the question: do we still need precise manual annotations to fine-tune SAM for high detection accuracy? This article proposes an annotation-free framework for deep unsupervised shadow detection (USD) by leveraging SAM's capabilities. The key lies in how to exploit the abilities acquired from a large-scale corpus and utilize them to improve downstream tasks. Instead of directly fine-tuning SAM, we propose a prompt-like tuning method to inject task-specific cues into SAM in a lightweight manner, namely, ShadowSAM. This adaptation manner can ensure a good fitting when training data are limited. Moreover, considering that the pseudo labels used in our framework are generated by traditional USD approaches and may contain severe label noises, we propose an illumination and texture-guided updating (ITU) strategy to selectively boost the quality of pseudo masks. To further improve the model's robustness, we design a mask diversity index (MDI) to establish easy-to-hard subsets for incremental curriculum learning. Extensive experiments on benchmark datasets (i.e., SBU, UCF, ISTD, and CUHK-Shadow) demonstrate that our unsupervised solution can achieve comparable performance to state-of-the-art (SOTA) fully supervised methods. Our code is available at this repository.
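
The abstract outlines three technical components: prompt-like tuning that injects trainable task cues into a frozen SAM (ShadowSAM), illumination and texture-guided updating (ITU) of noisy pseudo-masks, and a mask diversity index (MDI) for easy-to-hard curriculum learning. The paper's implementation is not reproduced in this record; the PyTorch sketch below is a minimal, hypothetical illustration of the first and third ideas only, and every name in it (PromptTunedEncoder, mask_diversity_index, the token and prompt dimensions) is an assumption, not the authors' code.

import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    # Frozen backbone plus a few trainable prompt tokens: only the prompt
    # embeddings receive gradients, so the adaptation stays lightweight.
    def __init__(self, backbone, embed_dim=256, n_prompts=8):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad_(False)  # keep the pretrained weights frozen
        self.prompts = nn.Parameter(torch.empty(n_prompts, embed_dim))
        nn.init.normal_(self.prompts, std=0.02)

    def forward(self, tokens):
        # tokens: (B, N, C) patch embeddings; prepend the learned prompts.
        prompts = self.prompts.unsqueeze(0).expand(tokens.shape[0], -1, -1)
        return self.backbone(torch.cat([prompts, tokens], dim=1))

def mask_diversity_index(masks):
    # masks: (B, K, H, W) binary pseudo-masks from K candidate sources.
    # Per-pixel disagreement p * (1 - p) peaks when candidates split 50/50;
    # its spatial mean gives one scalar per sample: low = easy, high = hard.
    p = masks.float().mean(dim=1)
    return (p * (1.0 - p)).mean(dim=(1, 2)) * 4.0  # scaled into [0, 1]

# Demo: prompts on top of a tiny frozen transformer, and a fake batch of
# pseudo-masks ordered easy-to-hard for incremental curriculum learning.
layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
encoder = PromptTunedEncoder(nn.TransformerEncoder(layer, num_layers=1))
out = encoder(torch.randn(2, 196, 256))    # shape (2, 8 + 196, 256)
masks = torch.rand(16, 3, 64, 64).round()  # 16 samples, 3 candidates each
easy_to_hard = torch.argsort(mask_diversity_index(masks))

How the paper actually computes the MDI is not stated in this record; the disagreement score above is only a stand-in with the same easy-versus-hard intent.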
Authors
– Chen, Xiao-Diao (ORCID 0000-0002-7523-7657; xiaodiao@hdu.edu.cn), School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
– Wu, Wen (ORCID 0000-0003-0919-3948; wuwen.hdu.cs@gmail.com), School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
– Yang, Wenya (ORCID 0000-0002-7041-1302; yangwenya@hdu.edu.cn), School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
– Qin, Hongshuai (qinhongshuai@hdu.edu.cn), School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
– Wu, Xiantao (xiantao.hdu.cs@gmail.com), School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
– Mao, Xiaoyang (ORCID 0000-0001-9531-3197; mao@yamanashi.ac.jp), Department of Computer Science and Engineering, University of Yamanashi, Kofu, Japan
CODEN IGRSD2
CitedBy 10.1109/TII.2024.3376726
Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
DOI 10.1109/TGRS.2023.3332257
Discipline Engineering; Physics
EISSN 1558-0644
Genre orig-research
GrantInformation National Natural Science Foundation of China, Grant 61972120 (funder ID 10.13039/501100001809)
ISSN 0196-2892
IsPeerReviewed true
IsScholarly true
PublicationTitle IEEE Transactions on Geoscience and Remote Sensing (TGRS)
PublicationYear 2023
SubjectTerms Annotations
Curriculum
Curriculum learning
Detection
Feature extraction
Image segmentation
Labels
Lighting
Localization
noisy label
segment anything model (SAM)
Segments
shadow detection
Shadows
Task analysis
Training
Training data
Tuning
unsupervised learning
URI https://ieeexplore.ieee.org/document/10315174
https://www.proquest.com/docview/2895009213