Cross-UNet: dual-branch infrared and visible image fusion framework based on cross-convolution and attention mechanism

Bibliographic Details
Published in: The Visual Computer, Vol. 39, No. 10, pp. 4801-4818
Main Authors: Wang, Xuejiao; Hua, Zhen; Li, Jinjiang
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg; Springer Nature B.V., 01.10.2023

Abstract: Existing infrared and visible image fusion methods suffer from edge information loss, artifact introduction, and image distortion. This paper therefore proposes Cross-UNet, a dual-branch network model based on the attention mechanism, for infrared and visible image fusion. First, the encoder adopts asymmetric convolution kernels, which simultaneously capture local detail information and global structural information of the source images from different directions. Second, a dual-attention mechanism is added to the fusion block to fuse the dual-branch image features at different scales. Finally, the decoder adopts an attention model with a large receptive field to strengthen its ability to judge feature importance, thereby improving fusion quality. On the public TNO, RoadScene, and Country datasets, the method is compared both qualitatively and quantitatively with nine other state-of-the-art fusion methods; the results show that the proposed model achieves superior performance and high stability.
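For a concrete picture of the two ideas the abstract names, the PyTorch sketch below illustrates an asymmetric (cross) convolution block and a channel-plus-spatial dual-attention fusion of two branch features. It is a minimal sketch under assumed settings: the class names (CrossConvBlock, DualAttentionFusion), the kernel sizes, the squeeze-and-excitation-style gating, and the channel widths are illustrative choices, not the paper's actual configuration, which is not reproduced in this record.

```python
# Hedged sketch of two components named in the abstract; layer sizes and
# names are assumptions for illustration, not the authors' implementation.
import torch
import torch.nn as nn

class CrossConvBlock(nn.Module):
    """Paired 1xk / kx1 kernels capture horizontal and vertical detail."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        pad = k // 2
        self.conv_h = nn.Conv2d(in_ch, out_ch, (1, k), padding=(0, pad))
        self.conv_v = nn.Conv2d(in_ch, out_ch, (k, 1), padding=(pad, 0))
        self.post = nn.Sequential(nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Summing the directional responses yields a cross-shaped receptive field.
        return self.post(self.conv_h(x) + self.conv_v(x))

class DualAttentionFusion(nn.Module):
    """Weights concatenated branches by channel, then refines spatially."""
    def __init__(self, ch: int):
        super().__init__()
        self.channel_gate = nn.Sequential(      # SE-style channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * ch, ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2 * ch, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(      # 7x7 conv over pooled statistics
            nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())
        self.merge = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        x = torch.cat([ir, vis], dim=1)
        x = x * self.channel_gate(x)            # channel attention
        stats = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1)
        x = x * self.spatial_gate(stats)        # spatial attention
        return self.merge(x)

# Usage: one encoder block per branch, then attention-guided fusion.
enc_ir, enc_vis = CrossConvBlock(1, 16), CrossConvBlock(1, 16)
fuse = DualAttentionFusion(16)
ir, vis = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
fused = fuse(enc_ir(ir), enc_vis(vis))
print(fused.shape)  # torch.Size([1, 16, 64, 64])
```

The design intuition behind pairing a 1xk and a kx1 kernel is that horizontal and vertical edge structure is captured explicitly rather than averaged into a single square kernel, which matches the abstract's claim of obtaining directional detail and structure simultaneously.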
Authors:
1. Xuejiao Wang, School of Computer Science and Technology, Shandong Technology and Business University, Institute of Network Technology (INT)
2. Zhen Hua, School of Information and Electronic Engineering, Shandong Technology and Business University (email: huazhen@sdtbu.edu.cn)
3. Jinjiang Li, School of Computer Science and Technology, Shandong Technology and Business University, Institute of Network Technology (INT) (ORCID: 0000-0002-2080-8678)
Cited by (CrossRef DOIs): 10.1007/s00371-023-02834-w; 10.1007/s00371-024-03273-x; 10.3389/fphy.2023.1180100; 10.3390/su152215920; 10.1364/JOSAA.492002; 10.3390/app14010114; 10.1007/s00371-023-02780-7; 10.1016/j.jrmge.2024.09.051; 10.1007/s11760-022-02392-z; 10.3390/rs16244781; 10.3389/fphy.2023.1214206; 10.1016/j.optlastec.2024.111666
Copyright: The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
DOI: 10.1007/s00371-022-02628-6
Discipline: Engineering; Computer Science
EISSN: 1432-2315
Funding: National Natural Science Foundation of China, grants 61972235, 12001327, 61772319, and 62002200 (funder ID: http://dx.doi.org/10.13039/501100001809)
ISSN: 0178-2789
Peer reviewed: Yes
Scholarly: Yes
Keywords: Attention mechanism; Cross-convolution; Image fusion
Publication subtitle: International Journal of Computer Graphics
Journal abbreviation: Vis Comput
Subjects: Algorithms; Artificial Intelligence; Computer Graphics; Computer Science; Computer vision; Convolution; Decomposition; Deep learning; Dictionaries; Image Processing and Computer Vision; Infrared imagery; Methods; Neural networks; Original Article; Sensors
Online access: https://link.springer.com/article/10.1007/s00371-022-02628-6
https://www.proquest.com/docview/2918070862