Cross-Modal Transformers for Infrared and Visible Image Fusion
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 34, No. 2, pp. 770-785 |
Main Authors | Seonghyun Park, An Gia Vien, Chul Lee |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.02.2024 |
Subjects | Algorithms; Artificial neural networks; Computer vision; Data mining; Feature extraction; Feature maps; Image fusion; infrared image; Infrared imagery; Infrared imaging; Object recognition; Performance enhancement; self-attention; transformer; Transformers; visible image |
Online Access | https://ieeexplore.ieee.org/document/10163247 (IEEE Xplore); https://www.proquest.com/docview/2923122530 (ProQuest) |
Abstract | Image fusion techniques aim to generate more informative images by merging multiple images of different modalities with complementary information. Despite significant fusion performance improvements of recent learning-based approaches, most fusion algorithms have been developed based on convolutional neural networks (CNNs), which stack deep layers to obtain a large receptive field for feature extraction. However, important details and contexts of the source images may be lost through a series of convolution layers. In this work, we propose a cross-modal transformer-based fusion (CMTFusion) algorithm for infrared and visible image fusion that captures global interactions by faithfully extracting complementary information from source images. Specifically, we first extract the multiscale feature maps of infrared and visible images. Then, we develop cross-modal transformers (CMTs) to retain complementary information in the source images by removing redundancies in both the spatial and channel domains. To this end, we design a gated bottleneck that integrates cross-domain interaction to consider the characteristics of the source images. Finally, a fusion result is obtained by exploiting spatial-channel information in refined feature maps using a fusion block. Experimental results on multiple datasets demonstrate that the proposed algorithm provides better fusion performance than state-of-the-art infrared and visible image fusion algorithms, both quantitatively and qualitatively. Furthermore, we show that the proposed algorithm can be used to improve the performance of computer vision tasks, e.g., object detection and monocular depth estimation. |
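To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of one cross-modal attention stage with a gated bottleneck. It illustrates the general idea only: the class name CrossModalBlock, the feature dimensions, the shared sigmoid gate, and the gated-residual wiring are assumptions for illustration, not the authors' published CMTFusion implementation (which also removes redundancy in the channel domain and operates over multiple scales).

```python
# Hypothetical sketch of a cross-modal attention block with a gated
# bottleneck, loosely following the pipeline outlined in the abstract.
# Module names, dimensions, and wiring are illustrative assumptions,
# not the authors' released code.
import torch
import torch.nn as nn


class CrossModalBlock(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        # Cross-attention: one modality's features query the other's,
        # so each stream can pull in complementary information.
        self.attn_ir = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_vis = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Gated bottleneck: a learned sigmoid gate controls how much
        # cross-modal information flows into each stream.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, f_ir: torch.Tensor, f_vis: torch.Tensor):
        # f_ir, f_vis: (batch, tokens, dim) flattened spatial feature maps.
        ir2vis, _ = self.attn_ir(f_ir, f_vis, f_vis)    # IR queries visible
        vis2ir, _ = self.attn_vis(f_vis, f_ir, f_ir)    # visible queries IR
        g_ir = self.gate(torch.cat([f_ir, ir2vis], dim=-1))
        g_vis = self.gate(torch.cat([f_vis, vis2ir], dim=-1))
        # Gated residual update: pass through only the gated portion of
        # the cross-modal signal, suppressing redundant content.
        return f_ir + g_ir * ir2vis, f_vis + g_vis * vis2ir


# Usage: 32x32 feature maps flattened to 1024 tokens of width 64.
block = CrossModalBlock()
f_ir = torch.randn(1, 1024, 64)
f_vis = torch.randn(1, 1024, 64)
out_ir, out_vis = block(f_ir, f_vis)
print(out_ir.shape, out_vis.shape)  # torch.Size([1, 1024, 64]) each
```

A full model along these lines would wrap several such blocks around multiscale CNN features and feed the refined maps to a fusion block; this sketch only demonstrates the cross-attention and gating pattern and verifies tensor shapes.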
Authors |
– Seonghyun Park, Department of Multimedia Engineering, Dongguk University, Seoul, South Korea (seonghyun@mme.dongguk.edu; ORCID 0000-0002-5690-7125)
– An Gia Vien, Department of Multimedia Engineering, Dongguk University, Seoul, South Korea (viengiaan@mme.dongguk.edu; ORCID 0000-0003-0067-0285)
– Chul Lee, Department of Multimedia Engineering, Dongguk University, Seoul, South Korea (chullee@dongguk.edu; ORCID 0000-0001-9329-7365)
CODEN | ITCTEM |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
DOI | 10.1109/TCSVT.2023.3289170 |
EISSN | 1558-2205 |
Genre | Original research |
Funding | Korean Government [Ministry of Science and ICT (MSIT)], Grant 2022R1F1A1074402; National Research Foundation of Korea (NRF) |
ISSN | 1051-8215 |
Peer Reviewed | Yes |
Scholarly | Yes |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |