Context and Spatial Feature Calibration for Real-Time Semantic Segmentation
Context modeling or multi-level feature fusion methods have proven effective in improving semantic segmentation performance. However, they are not specialized to deal with the problems of pixel-context mismatch and spatial feature misalignment, and their high computational complexity hinders widespread application in real-time scenarios.
Published in | IEEE Transactions on Image Processing, Vol. 32, pp. 5465-5477 |
---|---|
Main Authors | Li, Kaige; Geng, Qichuan; Wan, Maoxian; Cao, Xiaochun; Zhou, Zhong |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2023 |
Subjects | Real-time semantic segmentation; context feature calibration; spatial feature calibration; Context modeling; Semantic segmentation; Calibration |
Abstract | Context modeling or multi-level feature fusion methods have proven effective in improving semantic segmentation performance. However, they are not specialized to deal with the problems of pixel-context mismatch and spatial feature misalignment, and their high computational complexity hinders widespread application in real-time scenarios. In this work, we propose a lightweight Context and Spatial Feature Calibration Network (CSFCN) to address these issues with pooling-based and sampling-based attention mechanisms. CSFCN contains two core modules: the Context Feature Calibration (CFC) module and the Spatial Feature Calibration (SFC) module. CFC adopts a cascaded pyramid pooling module to efficiently capture nested contexts, and then aggregates private contexts for each pixel based on pixel-context similarity to realize context feature calibration. SFC splits features into multiple groups of sub-features along the channel dimension and propagates sub-features therein by learnable sampling to achieve spatial feature calibration. Extensive experiments on the Cityscapes and CamVid datasets illustrate that our method achieves a state-of-the-art trade-off between speed and accuracy. Concretely, our method achieves 78.7% mIoU at 70.0 FPS on the Cityscapes test set and 77.8% mIoU at 179.2 FPS on the CamVid test set. The code is available at https://nave.vr3i.com/ and https://github.com/kaigelee/CSFCN . |
---|---|
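The CFC mechanism described in the abstract (cascaded pyramid pooling followed by pixel-context similarity aggregation) can be made concrete with a short sketch. The block below is a minimal PyTorch illustration of pooling-based attention reconstructed from the abstract alone, not the authors' released implementation (see the GitHub link above); the channel width, pooling bin sizes, and layer names are assumptions.

```python
# Minimal sketch of pooling-based context calibration in the spirit of CFC.
# Channel width, pooling bins, and layer names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextCalibrationSketch(nn.Module):
    def __init__(self, channels=128, bins=(1, 2, 4)):
        super().__init__()
        self.bins = bins
        self.query = nn.Conv2d(channels, channels, 1)   # per-pixel queries
        self.key = nn.Conv2d(channels, channels, 1)     # pooled-context keys
        self.value = nn.Conv2d(channels, channels, 1)   # pooled-context values

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)    # B x HW x C
        # Cascaded pyramid pooling: each level pools the previous level's
        # output, producing nested contexts at progressively coarser scales.
        ks, vs, feats = [], [], x
        for bin_size in sorted(self.bins, reverse=True):  # e.g. 4 -> 2 -> 1
            feats = F.adaptive_avg_pool2d(feats, bin_size)
            ks.append(self.key(feats).flatten(2))       # B x C x bin^2
            vs.append(self.value(feats).flatten(2))
        k = torch.cat(ks, dim=2)                        # B x C x N
        v = torch.cat(vs, dim=2).transpose(1, 2)        # B x N x C
        # Pixel-context similarity selects a private context per pixel.
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)  # B x HW x N
        ctx = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + ctx                                  # calibrated features

# Example: y = ContextCalibrationSketch()(torch.randn(2, 128, 64, 128))
```

Because attention runs over a handful of pooled context vectors (21 here) rather than all HW pixel pairs, this style of attention stays cheap enough for real-time use, which matches the abstract's emphasis on low complexity.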
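Similarly, a hedged sketch of the SFC idea: split channels into groups and re-sample each group at learned offsets (sampling-based attention). The offset head, group count, and coordinate normalization below are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of sampling-based spatial calibration in the spirit of SFC:
# channels are split into groups and each group is re-sampled at learned
# offsets. Group count and the offset head are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialCalibrationSketch(nn.Module):
    def __init__(self, channels=128, groups=4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        # One (dx, dy) offset field per channel group, in normalized [-1, 1]
        # grid coordinates; zero-init so the block starts as an identity map.
        self.offset = nn.Conv2d(channels, 2 * groups, 3, padding=1)
        nn.init.zeros_(self.offset.weight)
        nn.init.zeros_(self.offset.bias)

    def forward(self, x):
        b, c, h, w = x.shape
        offsets = self.offset(x).view(b, self.groups, 2, h, w)
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=x.device),
            torch.linspace(-1, 1, w, device=x.device),
            indexing="ij",
        )
        base = torch.stack((xs, ys), dim=-1)            # H x W x 2, (x, y) order
        outs = []
        for g, sub in enumerate(x.chunk(self.groups, dim=1)):
            off = offsets[:, g].permute(0, 2, 3, 1)     # B x H x W x 2
            grid = base.unsqueeze(0) + off              # learnable sampling grid
            outs.append(F.grid_sample(sub, grid, align_corners=True))
        return torch.cat(outs, dim=1)                   # calibrated features

# Example: y = SpatialCalibrationSketch()(torch.randn(2, 128, 64, 128))
```

Giving each channel group its own offset field lets different sub-features shift independently, which is one plausible reading of how per-group learnable sampling can correct spatial misalignment between fused feature levels.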
Author | Li, Kaige; Geng, Qichuan; Wan, Maoxian; Cao, Xiaochun; Zhou, Zhong |
Author_xml | – sequence: 1; givenname: Kaige; surname: Li; orcidid: 0000-0002-1716-4381; email: lkg@buaa.edu.cn; organization: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
– sequence: 2; givenname: Qichuan; surname: Geng; email: gengqichuan1989@cnu.edu.cn; organization: Information Engineering College, Capital Normal University, Beijing, China
– sequence: 3; givenname: Maoxian; surname: Wan; orcidid: 0009-0000-5396-0185; email: wanmaoxian@buaa.edu.cn; organization: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
– sequence: 4; givenname: Xiaochun; surname: Cao; orcidid: 0000-0001-7141-708X; email: caoxiaochun@mail.sysu.edu.cn; organization: School of Cyber Science and Technology, Shenzhen Campus, Sun Yat-sen University, Shenzhen, China
– sequence: 5; givenname: Zhong; surname: Zhou; orcidid: 0000-0002-5825-7517; email: zz@buaa.edu.cn; organization: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China |
CODEN | IIPRE4 |
CitedBy_id | 10.1109/ACCESS.2025.3529812; 10.1109/TIP.2024.3425048; 10.1109/TIP.2025.3526054; 10.1109/TGRS.2024.3516501; 10.1016/j.neucom.2024.128991; 10.3390/drones8080400; 10.1016/j.jksuci.2024.102226; 10.1088/1361-6501/ada2b6 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
DOI | 10.1109/TIP.2023.3318967 |
DatabaseName | IEEE All-Society Periodicals Package (ASPP) 2005–Present IEEE All-Society Periodicals Package (ASPP) 1998–Present IEEE Xplore CrossRef Computer and Information Systems Abstracts Electronics & Communications Abstracts Technology Research Database ProQuest Computer Science Collection Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Academic Computer and Information Systems Abstracts Professional MEDLINE - Academic |
Discipline | Applied Sciences Engineering |
EISSN | 1941-0042 |
EndPage | 5477 |
ExternalDocumentID | 10_1109_TIP_2023_3318967 10268334 |
Genre | orig-research |
GrantInformation_xml | – fundername: National Natural Science Foundation of China; grantid: 62272018; funderid: 10.13039/501100001809
– fundername: National Key Research and Development Program of China; grantid: 2018YFB2100603; funderid: 10.13039/501100012166 |
ISSN | 1057-7149 |
IsPeerReviewed | true |
IsScholarly | true |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0001-7141-708X (Xiaochun Cao); 0009-0000-5396-0185 (Maoxian Wan); 0000-0002-1716-4381 (Kaige Li); 0000-0002-5825-7517 (Zhong Zhou) |
PMID | 37773909 |
PQID | 2881525638 |
PQPubID | 85429 |
PageCount | 13 |
PublicationDate | 2023 |
PublicationPlace | New York |
PublicationTitle | IEEE Transactions on Image Processing |
PublicationTitleAbbrev | TIP |
PublicationYear | 2023 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
Snippet | Context modeling or multi-level feature fusion methods have proven effective in improving semantic segmentation performance. However, they are not... |
SourceID | proquest crossref ieee |
SourceType | Aggregation Database; Enrichment Source; Index Database; Publisher |
StartPage | 5465 |
SubjectTerms | Aggregates; Calibration; Context; context feature calibration; Context modeling; Misalignment; Modules; Pixels; Real time; Real-time semantic segmentation; Real-time systems; Sampling; Semantic segmentation; Semantics; spatial feature calibration; Transformers |
Title | Context and Spatial Feature Calibration for Real-Time Semantic Segmentation |
URI | https://ieeexplore.ieee.org/document/10268334 https://www.proquest.com/docview/2881525638 https://www.proquest.com/docview/2870996163 |
Volume | 32 |