Cross-Collaborative Fusion-Encoder Network for Robust RGB-Thermal Salient Object Detection
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, no. 11, pp. 7646-7661 |
---|---|
Main Authors | Liao, Guibiao; Gao, Wei; Li, Ge; Wang, Junle; Kwong, Sam |
Format | Journal Article |
Language | English |
Published | New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2022 |
Abstract | With the prevalence of thermal cameras, RGB-T multi-modal data have become more available for salient object detection (SOD) in complex scenes. Most RGB-T SOD works first individually extract RGB and thermal features from two separate encoders and directly integrate them, which pay less attention to the issue of defective modalities. However, such an indiscriminate feature extraction strategy may produce contaminated features and thus lead to poor SOD performance. To address this issue, we propose a novel CCFENet from a new perspective to perform robust and accurate multi-modal feature encoding. First, we propose an essential cross-collaboration enhancement strategy (CCE), which concentrates on facilitating the interactions across the encoders and encouraging different modalities to complement each other during encoding. Such a cross-collaborative-encoder paradigm induces our network to collaboratively suppress the negative feature responses of defective modality data and effectively exploit modality-informative features. Moreover, as the network goes deeper, we embed several CCEs into the encoder, further enabling more representative and robust feature generation. Second, benefiting from the proposed robust encoding paradigm, a simple yet effective cross-scale cross-modal decoder (CCD) is designed to aggregate multi-level complementary multi-modal features, and thus encourages efficient and accurate RGB-T SOD. Extensive experiments reveal that our CCFENet outperforms the state-of-the-art models on three RGB-T datasets with a fast inference speed of 62 FPS. In addition, the advantages of our approach in complex scenarios (e.g., bad weather, motion blur, etc.) and RGB-D SOD further verify its robustness and generality. The source code will be publicly available via our project page: https://git.openi.org.cn/OpenVision/CCFENet . |
---|---|
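The abstract describes the cross-collaboration enhancement (CCE) only at a high level: each encoder's features are modulated by cues from the other modality, so a defective modality's unreliable responses are suppressed rather than blindly fused. A minimal pure-Python sketch of that general gating idea follows; the function names and the specific sigmoid-of-mean gating form are hypothetical illustrations, not the paper's actual CCE module.

```python
import math

def sigmoid(x):
    """Logistic squashing used to turn a cross-modal cue into a 0-1 gate."""
    return 1.0 / (1.0 + math.exp(-x))

def cross_collaborate(rgb, thermal):
    """Hypothetical sketch of cross-modal collaborative gating.

    Each modality's feature vector receives a correction from the other
    modality, scaled by a gate derived from that other modality's overall
    response. A defective modality (near-zero responses) thus contributes
    almost nothing to its partner, loosely mirroring the suppression of
    negative feature responses described in the abstract.
    """
    # Gate on the thermal->RGB correction, driven by thermal's mean response.
    g_rgb = sigmoid(sum(thermal) / len(thermal))
    # Gate on the RGB->thermal correction, driven by RGB's mean response.
    g_t = sigmoid(sum(rgb) / len(rgb))
    rgb_out = [r + g_rgb * t for r, t in zip(rgb, thermal)]
    thermal_out = [t + g_t * r for r, t in zip(rgb, thermal)]
    return rgb_out, thermal_out
```

With an all-zero (fully defective) thermal input, the RGB stream passes through unchanged while the thermal stream is rebuilt from gated RGB cues; this is only a conceptual analogy to the encoder-level interaction the paper proposes.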
Author | Wang, Junle; Liao, Guibiao; Li, Ge; Kwong, Sam; Gao, Wei |
Author_xml | – sequence: 1 givenname: Guibiao orcidid: 0000-0002-5714-1926 surname: Liao fullname: Liao, Guibiao email: gbliao@stu.edu.pku.cn organization: School of Electronic and Computer Engineering, Peking University, Shenzhen, China – sequence: 2 givenname: Wei orcidid: 0000-0001-7429-5495 surname: Gao fullname: Gao, Wei email: gaowei262@pku.edu.cn organization: School of Electronic and Computer Engineering, Peking University, Shenzhen, China – sequence: 3 givenname: Ge orcidid: 0000-0003-0140-0949 surname: Li fullname: Li, Ge email: geli@pku.edu.cn organization: School of Electronic and Computer Engineering, Peking University, Shenzhen, China – sequence: 4 givenname: Junle surname: Wang fullname: Wang, Junle email: jljunlewang@tencent.com organization: Turing Laboratory, Tencent, Shenzhen, China – sequence: 5 givenname: Sam orcidid: 0000-0001-7484-7261 surname: Kwong fullname: Kwong, Sam email: cssamk@cityu.edu.hk organization: Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong |
CODEN | ITCTEM |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DOI | 10.1109/TCSVT.2022.3184840 |
Discipline | Engineering |
EISSN | 1558-2205 |
EndPage | 7661 |
ExternalDocumentID | 10_1109_TCSVT_2022_3184840 9801871 |
Genre | orig-research |
GrantInformation_xml | – fundername: Guangdong Basic and Applied Basic Research Foundation grantid: 2019A1515012031 – fundername: National Key Research and Development Program of China grantid: 2020AAA0103501 funderid: 10.13039/501100012166 – fundername: Shenzhen Science and Technology Plan Basic Research Project grantid: JCYJ20190808161805519 – fundername: Shenzhen Fundamental Research Program grantid: GXWD20201231165807007-20200806163656003 funderid: 10.13039/501100017607 – fundername: Hong Kong RGC GRF grantid: 9042816 (CityU 11209819); 9042958 (CityU 11203820) funderid: 10.13039/501100002920 – fundername: Natural Science Foundation of China grantid: 61801303; 62031013 funderid: 10.13039/501100001809 – fundername: Hong Kong Innovation and Technology Commission (InnoHK) under Project CIMDA |
ISSN | 1051-8215 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 11 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0002-5714-1926 0000-0003-0140-0949 0000-0001-7429-5495 0000-0001-7484-7261 |
PageCount | 16 |
PublicationDate | 2022-11-01 |
PublicationPlace | New York |
PublicationTitle | IEEE transactions on circuits and systems for video technology |
PublicationTitleAbbrev | TCSVT |
PublicationYear | 2022 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 7646 |
SubjectTerms | Blurring Coders Collaboration cross-collaborative Decoding Encoding Feature extraction fusion-encoder Modal data Noise measurement Object detection Object recognition RGB-thermal salient object detection Robustness Salience Saliency detection Source code Task analysis |
Title | Cross-Collaborative Fusion-Encoder Network for Robust RGB-Thermal Salient Object Detection |
URI | https://ieeexplore.ieee.org/document/9801871 https://www.proquest.com/docview/2729638217 |
Volume | 32 |
link | http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwjV07T8MwELYKEwy8CqK85IEN3DqpmzgjFEqFBEiloIolsp3rQpUiSBj49ZydB08htgy2Y_nOvu-z70HIYeApz0jcaQnXgokudJkUUchC--Y2VRCIqcv2eR0M78TlpDdpkOM6FgYAnPMZtO2ne8tP5ia3V2WdSNoScsh1FpC4FbFa9YuBkK6YGMIFj0m0Y1WADI864_7t_RipoO8jQ5XYkH8xQq6qyo-j2NmXwSq5qmZWuJU8tvNMt83bt6SN_536GlkpgSY9KTRjnTQg3SDLn9IPNslD386O9T9U4RXoILf3Z-w8tcHuz_S6cBOniG3paK7zl4yOLk4Zahee6DN6izAef0xvtL3PoWeQOdeudJPcDc7H_SEray0w40e9jFliA1onEYQJV7bQC5dGBoAMhStfR5onIuhpNPA8MIkKjTKeCn2htQEtlOxukcV0nsI2oSCiKe8CcCVDoZWnfAsjtOFKId4zQYt41eLHpkxEbuthzGJHSHgUO4HFVmBxKbAWOar7PBVpOP5s3bQSqFuWi98ie5WM43KnvsQ-sgs8g5CZ7fzea5cs2bGL-MM9spg957CPQCTTB04D3wEjwdkv |
linkProvider | IEEE |
linkToHtml | http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwjV1LbxoxEB5F9NDk0BeNQkNbH3prTLyL2fUeWwolCVAJSIV6Wdne4VK0VLCbQ399x96F9KWqNx9s2fKMPd9nzwPgTRTowCo6aZkwkssudrmSScxj9-e20hjJlc_2OY1Gt_J62VsewcUhFgYRvfMZdlzT_-VnG1u6p7LLRLkScsR1HpDd74VVtNbhz0AqX06MAEPAFVmyfYiMSC4X_fnnBZHBMCSOqqij-MUM-boqf1zG3sIMH8Nkv7bKseRrpyxMx37_LW3j_y7-CTyqoSZ7V-nGUzjC_Bmc_JSAsAlf-m51vH-vDHfIhqV7QeOD3IW7b9m0chRnhG7ZbGPKXcFmH99z0i-609dsTkCeJmafjHvRYR-w8M5d-XO4HQ4W_RGvqy1wGya9gjtqg8ZkCcaZ0K7Ui1BWRUgcRejQJEZkMuoZMvEispmOrbaBjkNpjEUjteqeQiPf5HgGDGWyEl1EoVUsjQ506ICEsUJrQnw2akGw3_zU1qnIXUWMdeopiUhSL7DUCSytBdaCt4cx36pEHP_s3XQSOPSsN78F7b2M0_qs7tKQ-AXdQsTNXvx91Gt4OFpMxun4anpzDsdunioasQ2NYlviS4IlhXnltfEHo__ceQ |
openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Cross-Collaborative+Fusion-Encoder+Network+for+Robust+RGB-Thermal+Salient+Object+Detection&rft.jtitle=IEEE+transactions+on+circuits+and+systems+for+video+technology&rft.au=Liao%2C+Guibiao&rft.au=Gao%2C+Wei&rft.au=Li%2C+Ge&rft.au=Wang%2C+Junle&rft.date=2022-11-01&rft.pub=IEEE&rft.issn=1051-8215&rft.volume=32&rft.issue=11&rft.spage=7646&rft.epage=7661&rft_id=info:doi/10.1109%2FTCSVT.2022.3184840&rft.externalDocID=9801871 |
thumbnail_l | http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/lc.gif&issn=1051-8215&client=summon |
thumbnail_m | http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/mc.gif&issn=1051-8215&client=summon |
thumbnail_s | http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/sc.gif&issn=1051-8215&client=summon |