MMSMCNet: Modal Memory Sharing and Morphological Complementary Networks for RGB-T Urban Scene Semantic Segmentation
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 33, No. 12, p. 1 |
Main Authors | Zhou, Wujie; Zhang, Han; Yan, Weiqing; Lin, Weisi |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.12.2023 |
Subjects | RGB-T semantic segmentation; memory sharing; morphological complementary; contour skeleton positioning; complementary supervision strategy |
Online Access | https://ieeexplore.ieee.org/document/10123009 |
Abstract | Combining color (RGB) images with thermal images can facilitate semantic segmentation of poorly lit urban scenes. However, for RGB-thermal (RGB-T) semantic segmentation, most existing models address cross-modal feature fusion by focusing only on exploring the samples while neglecting the connections between different samples. Additionally, although the importance of boundary, binary, and semantic information is considered in the decoding process, the differences and complementarities between different morphological features are usually neglected. In this paper, we propose a novel RGB-T semantic segmentation network, called MMSMCNet, based on modal memory fusion and morphological multiscale assistance to address the aforementioned problems. For this network, in the encoding part, we used SegFormer for feature extraction of bimodal inputs. Next, our modal memory sharing module implements staged learning and memory sharing of sample information across modal multiscales. Furthermore, we constructed a decoding union unit comprising three decoding units in a layer-by-layer progression that can extract two different morphological features according to the information category and realize the complementary utilization of multiscale cross-modal fusion information. Each unit contains a contour positioning module based on detail information, a skeleton positioning module with deep features as the primary input, and a morphological complementary module for mutual reinforcement of the first two types of information and construction of semantic information. Based on this, we constructed a new supervision strategy, that is, a multi-unit-based complementary supervision strategy. Extensive experiments using two standard datasets showed that MMSMCNet outperformed related state-of-the-art methods. The code is available at: https://github.com/2021nihao/MMSMCNet. |
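To make the data flow outlined in the abstract concrete, below is a minimal, self-contained PyTorch sketch of that kind of pipeline: two modality-specific encoders (tiny convolutional stacks standing in for the SegFormer backbones), a gated cross-modal block standing in for the modal memory sharing module, and decoding units that combine a shallow (contour/detail) branch with a deep (skeleton) branch before a complementary merge. All module internals, channel sizes, and names here are illustrative assumptions, not the authors' implementation; the real MMSMCNet code is at https://github.com/2021nihao/MMSMCNet.

```python
# Hypothetical, simplified sketch of the fusion/decoding flow described in the
# abstract. Module names, channel counts, and internals are assumptions; see the
# authors' repository for the actual MMSMCNet definitions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(cin, cout, k=3):
    return nn.Sequential(
        nn.Conv2d(cin, cout, k, padding=k // 2, bias=False),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )


class MemorySharingFusion(nn.Module):
    """Stand-in for the modal memory sharing module: a learned gate mixes
    RGB and thermal features of the same scale before merging them."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())
        self.merge = conv_bn_relu(2 * channels, channels)

    def forward(self, rgb, thermal):
        g = self.gate(torch.cat([rgb, thermal], dim=1))        # shared gating map
        return self.merge(torch.cat([rgb * g, thermal * (1 - g)], dim=1))


class DecodingUnit(nn.Module):
    """Stand-in for one decoding union unit: a contour branch driven by shallow
    (detail) features, a skeleton branch driven by deep features, and a
    complementary merge that produces the semantic feature."""
    def __init__(self, channels):
        super().__init__()
        self.contour = conv_bn_relu(channels, channels)
        self.skeleton = conv_bn_relu(channels, channels)
        self.complement = conv_bn_relu(2 * channels, channels)

    def forward(self, shallow, deep):
        deep_up = F.interpolate(deep, size=shallow.shape[-2:],
                                mode="bilinear", align_corners=False)
        c = self.contour(shallow)       # boundary-like cue from detail features
        s = self.skeleton(deep_up)      # body/skeleton-like cue from deep features
        return self.complement(torch.cat([c, s], dim=1))


class ToyMMSMCNet(nn.Module):
    """Two tiny conv encoders stand in for the bimodal SegFormer backbone;
    the per-unit deep supervision described in the abstract is omitted."""
    def __init__(self, num_classes=9, channels=32):
        super().__init__()
        self.enc_rgb = nn.ModuleList([conv_bn_relu(3, channels),
                                      conv_bn_relu(channels, channels),
                                      conv_bn_relu(channels, channels)])
        self.enc_t = nn.ModuleList([conv_bn_relu(1, channels),
                                    conv_bn_relu(channels, channels),
                                    conv_bn_relu(channels, channels)])
        self.fuse = nn.ModuleList([MemorySharingFusion(channels) for _ in range(3)])
        self.decode = nn.ModuleList([DecodingUnit(channels) for _ in range(2)])
        self.head = nn.Conv2d(channels, num_classes, 1)

    def forward(self, rgb, thermal):
        fused, x, y = [], rgb, thermal
        for er, et, fu in zip(self.enc_rgb, self.enc_t, self.fuse):
            x, y = F.max_pool2d(er(x), 2), F.max_pool2d(et(y), 2)
            fused.append(fu(x, y))                      # multiscale cross-modal fusion
        f = fused[-1]
        for dec, shallow in zip(self.decode, reversed(fused[:-1])):
            f = dec(shallow, f)                         # layer-by-layer decoding
        logits = self.head(f)
        return F.interpolate(logits, scale_factor=2, mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    net = ToyMMSMCNet()
    out = net(torch.randn(1, 3, 128, 160), torch.randn(1, 1, 128, 160))
    print(out.shape)  # torch.Size([1, 9, 128, 160])
```

Running the `__main__` block prints `torch.Size([1, 9, 128, 160])`; the sigmoid gate in `MemorySharingFusion` is only the simplest possible stand-in for sharing information between modalities, and the multi-unit complementary supervision strategy is not modeled here.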
Author | Zhou, Wujie; Zhang, Han; Lin, Weisi; Yan, Weiqing |
Author Affiliations | Zhou, Wujie (ORCID 0000-0002-3055-2493), School of Information & Electronic Engineering, Zhejiang University of Science & Technology, Hangzhou, China; Zhang, Han, School of Information & Electronic Engineering, Zhejiang University of Science & Technology, Hangzhou, China; Yan, Weiqing (ORCID 0000-0001-7869-2404), School of Computer and Control Engineering, Yantai University, Yantai, China; Lin, Weisi (ORCID 0000-0001-9866-1947), School of Computer Science and Engineering, Nanyang Technological University, Singapore |
CODEN | ITCTEM |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
DOI | 10.1109/TCSVT.2023.3275314 |
Discipline | Engineering |
EISSN | 1558-2205 |
EndPage | 1 |
Genre | orig-research |
GrantInformation | National Key Research and Development Program of China (Grant 2022YFE0196000); National Natural Science Foundation of China (Grant 61502429; funder ID 10.13039/501100001809) |
ISSN | 1051-8215 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 12 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0001-9866-1947 0000-0001-7869-2404 0000-0002-3055-2493 |
PageCount | 1 |
PublicationCentury | 2000 |
PublicationDate | 2023-12-01 |
PublicationDecade | 2020 |
PublicationPlace | New York |
PublicationTitle | IEEE transactions on circuits and systems for video technology |
PublicationTitleAbbrev | TCSVT |
PublicationYear | 2023 |
Publisher | IEEE; The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 1 |
SubjectTerms | complementary supervision strategy; contour skeleton positioning; Data mining; Decoding; Feature extraction; Image segmentation; information; memory sharing; Modules; morphological complementary; Morphology; RGB-T semantic segmentation; Semantic segmentation; Semantics; Skeleton; Thermal imaging |
Title | MMSMCNet: Modal Memory Sharing and Morphological Complementary Networks for RGB-T Urban Scene Semantic Segmentation |
URI | https://ieeexplore.ieee.org/document/10123009 https://www.proquest.com/docview/2899465925 |
Volume | 33 |