Difference-Complementary Learning and Label Reassignment for Multimodal Semi-Supervised Semantic Segmentation of Remote Sensing Images
Published in | IEEE Transactions on Image Processing, Vol. 34, pp. 566-580 |
Main Authors | Han, Wenqi; Jiang, Wen; Geng, Jie; Miao, Wang |
Format | Journal Article |
Language | English |
Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.01.2025 |
Subjects | |
Abstract | The feature fusion of optical and Synthetic Aperture Radar (SAR) images is widely used for semantic segmentation of multimodal remote sensing images. It leverages information from two different sensors to enhance the capability of land cover analysis. However, the imaging characteristics of optical and SAR data are vastly different, and noise interference makes the fusion of multimodal data information challenging. Furthermore, in practical remote sensing applications, there are typically only a limited number of labeled samples available, while most pixels remain unlabeled. Semi-supervised learning has the potential to improve model performance in scenarios with limited labeled data. However, in remote sensing applications, the quality of pseudo-labels is frequently compromised, particularly in challenging regions such as blurred edges and areas with class confusion. This degradation in label quality can have a detrimental effect on the model's overall performance. In this paper, we introduce the Difference-complementary Learning and Label Reassignment (DLLR) network for multimodal semi-supervised semantic segmentation of remote sensing images. Our proposed DLLR framework leverages asymmetric masking to create information discrepancies between the optical and SAR modalities, and employs a difference-guided complementary learning strategy to enable mutual learning. Subsequently, we introduce a multi-level label reassignment strategy, treating the label assignment problem as an optimal transport optimization task to allocate unlabeled pixels to classes with higher precision, thereby enhancing the quality of pseudo-label annotations. Finally, we introduce a multimodal consistency cross pseudo-supervision strategy to improve pseudo-label utilization. We evaluate our method on two multimodal remote sensing datasets, namely, the WHU-OPT-SAR and EErDS-OPT-SAR datasets. Experimental results demonstrate that our proposed DLLR model outperforms other relevant deep networks in terms of accuracy in multimodal semantic segmentation. |
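The abstract frames pseudo-label reassignment for unlabeled pixels as an optimal transport optimization task. The sketch below is only a generic illustration of that idea (entropic optimal transport solved with Sinkhorn iterations), not the authors' DLLR implementation; the cost definition, uniform class prior, regularization strength `eps`, and the function name `sinkhorn_label_assignment` are assumptions made for the example.

```python
# Illustrative sketch only: pseudo-label reassignment posed as entropic optimal
# transport and solved with Sinkhorn iterations. This is NOT the DLLR code from
# the paper; cost, class prior, and eps are assumptions for the toy example.
import numpy as np

def sinkhorn_label_assignment(probs, class_prior=None, eps=0.05, n_iters=50):
    """probs: (N, K) softmax predictions for N unlabeled pixels over K classes.
    Returns an (N, K) transport plan whose row-wise argmax gives reassigned labels."""
    n, k = probs.shape
    cost = -np.log(probs + 1e-8)        # moving mass to confident classes is cheap
    kernel = np.exp(-cost / eps)        # entropic-regularization kernel
    r = np.full(n, 1.0 / n)             # each pixel contributes equal mass
    c = class_prior if class_prior is not None else np.full(k, 1.0 / k)
    u = np.ones(n) / n
    for _ in range(n_iters):            # alternate projections onto the two marginals
        v = c / (kernel.T @ u)
        u = r / (kernel @ v)
    return u[:, None] * kernel * v[None, :]

# Example: reassign labels for 4 pixels over 3 classes.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.4, 0.4, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.3, 0.3, 0.4]])
plan = sinkhorn_label_assignment(probs)
pseudo_labels = plan.argmax(axis=1)     # transport-plan argmax replaces raw softmax argmax
```

In this toy setting, taking the argmax of the transport plan instead of the raw network softmax is the general mechanism by which OT-based reassignment can push pseudo-labels toward a target class distribution.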
Author | Han, Wenqi; Jiang, Wen; Geng, Jie; Miao, Wang |
Author_xml | – sequence: 1 givenname: Wenqi surname: Han fullname: Han, Wenqi email: hanwenqinwpu@mail.nwpu.edu.cn organization: School of Electronics and Information, Northwestern Polytechnical University, Xi'an, China
– sequence: 2 givenname: Wen orcidid: 0000-0001-5429-2748 surname: Jiang fullname: Jiang, Wen email: jiangwen@nwpu.edu.cn organization: School of Electronics and Information, Northwestern Polytechnical University, Xi'an, China
– sequence: 3 givenname: Jie orcidid: 0000-0003-4858-823X surname: Geng fullname: Geng, Jie email: gengjie@nwpu.edu.cn organization: School of Electronics and Information, Northwestern Polytechnical University, Xi'an, China
– sequence: 4 givenname: Wang orcidid: 0009-0006-8704-4445 surname: Miao fullname: Miao, Wang email: mw0638@mail.nwpu.edu.cn organization: School of Electronics and Information, Northwestern Polytechnical University, Xi'an, China |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/40030991 (View this record in MEDLINE/PubMed) |
CODEN | IIPRE4 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2025 |
DOI | 10.1109/TIP.2025.3526064 |
DatabaseName | IEEE Xplore (IEEE) IEEE All-Society Periodicals Package (ASPP) 1998–Present IEEE/IET Electronic Library CrossRef PubMed Computer and Information Systems Abstracts Electronics & Communications Abstracts Technology Research Database ProQuest Computer Science Collection Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Academic Computer and Information Systems Abstracts Professional MEDLINE - Academic |
DatabaseTitle | CrossRef PubMed Technology Research Database Computer and Information Systems Abstracts – Academic Electronics & Communications Abstracts ProQuest Computer Science Collection Computer and Information Systems Abstracts Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Professional MEDLINE - Academic |
DatabaseTitleList | Technology Research Database PubMed MEDLINE - Academic |
Discipline | Applied Sciences Engineering |
EISSN | 1941-0042 |
EndPage | 580 |
ExternalDocumentID | 40030991 10_1109_TIP_2025_3526064 10838294 |
Genre | orig-research Journal Article |
GrantInformation_xml | – fundername: National Key Research and Development Program of China; grantid: 2021YFB3900502; funderid: 10.13039/501100012166 |
ISSN | 1057-7149 1941-0042 |
IsPeerReviewed | true |
IsScholarly | true |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0009-0006-8704-4445 0000-0001-5429-2748 0000-0003-4858-823X |
PMID | 40030991 |
PQID | 3156644266 |
PQPubID | 85429 |
PageCount | 15 |
PublicationCentury | 2000 |
PublicationDate | 2025-01-01 |
PublicationDateYYYYMMDD | 2025-01-01 |
PublicationDecade | 2020 |
PublicationPlace | United States |
PublicationPlace_xml | – name: United States – name: New York |
PublicationTitle | IEEE transactions on image processing |
PublicationTitleAbbrev | TIP |
PublicationTitleAlternate | IEEE Trans Image Process |
PublicationYear | 2025 |
Publisher | IEEE; The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
SourceID | proquest pubmed crossref ieee |
SourceType | Aggregation Database Index Database Publisher |
StartPage | 566 |
SubjectTerms | Accuracy; Adaptive optics; Datasets; Image segmentation; Labels; Land cover; multimodal fusion; Optical imaging; Optical sensors; Optimization; Pixels; Radar imaging; Radar polarimetry; Remote sensing; Semantic segmentation; Semantics; Semi-supervised learning; Sensors; Synthetic aperture radar |
Title | Difference-Complementary Learning and Label Reassignment for Multimodal Semi-Supervised Semantic Segmentation of Remote Sensing Images |
URI | https://ieeexplore.ieee.org/document/10838294 https://www.ncbi.nlm.nih.gov/pubmed/40030991 https://www.proquest.com/docview/3156644266 https://www.proquest.com/docview/3173404245 |
Volume | 34 |