TCCFusion: An infrared and visible image fusion method based on transformer and cross correlation

Bibliographic Details
Published in: Pattern Recognition, Vol. 137, p. 109295
Main Authors: Tang, Wei; He, Fazhi; Liu, Yu
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.05.2023
Summary:
•We propose TCCFusion, an end-to-end infrared and visible image fusion method based on the Transformer and cross correlation.
•We present a local-global parallel network to adequately preserve complementary information.
•We design a cross correlation loss to train TCCFusion in an unsupervised manner.
•Extensive experiments demonstrate that the proposed method outperforms state-of-the-art methods.

Infrared and visible image fusion aims to obtain a synthetic image that simultaneously exhibits salient objects and provides abundant texture details. However, existing deep learning-based methods generally depend on convolutional operations, which offer good local feature extraction but whose restricted receptive field limits their ability to model long-range dependencies. To overcome this limitation, we propose an infrared and visible image fusion method based on the Transformer and cross correlation, named TCCFusion. Specifically, we design a local feature extraction branch (LFEB) to preserve local complementary information, in which a densely connected network reuses information that may otherwise be lost during convolution. To avoid the receptive-field limitation and fully extract globally significant information, we devise a global feature extraction branch (GFEB) consisting of three Transformer blocks that model long-range relationships. LFEB and GFEB are arranged in parallel to retain local and global information more effectively. Furthermore, we design a cross correlation loss to train the proposed fusion model in an unsupervised manner, so that the fusion result preserves adequate thermal radiation information from the infrared image and ample texture details from the visible image. Extensive experiments on two mainstream datasets show that TCCFusion outperforms state-of-the-art algorithms in both visual quality and quantitative assessment. Ablation studies on the network architecture and objective function confirm the effectiveness of the proposed design.
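The abstract specifies neither the exact layer configuration nor the precise correlation formula, so the following is a minimal PyTorch sketch of the two ideas it describes: a dense local branch and a Transformer global branch run in parallel, trained with a normalized cross-correlation objective. All names (DenseLocalBranch, TransformerGlobalBranch, TCCFusionSketch), channel widths, patch size, and the NCC formulation are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of the local-global parallel design plus an unsupervised
# cross-correlation loss, under assumptions stated above. Not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseLocalBranch(nn.Module):
    """Local branch: each conv reuses all earlier feature maps (dense shape)."""
    def __init__(self, in_ch=2, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Conv2d(ch, growth, 3, padding=1))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(F.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)


class TransformerGlobalBranch(nn.Module):
    """Global branch: patch tokens through three Transformer encoder blocks."""
    def __init__(self, in_ch=2, dim=64, patch=8, blocks=3, heads=4):
        super().__init__()
        self.patch = patch
        self.embed = nn.Conv2d(in_ch, dim, patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, blocks)
        self.out_ch = dim

    def forward(self, x):
        t = self.embed(x)                               # (B, dim, H/p, W/p)
        b, c, h, w = t.shape
        t = self.encoder(t.flatten(2).transpose(1, 2))  # (B, h*w, dim) tokens
        t = t.transpose(1, 2).reshape(b, c, h, w)
        return F.interpolate(t, scale_factor=self.patch, mode="bilinear")


class TCCFusionSketch(nn.Module):
    """Both branches see the stacked source pair; outputs fuse via a conv head."""
    def __init__(self):
        super().__init__()
        self.local_branch = DenseLocalBranch()
        self.global_branch = TransformerGlobalBranch()
        fused_ch = self.local_branch.out_ch + self.global_branch.out_ch
        self.head = nn.Sequential(nn.Conv2d(fused_ch, 32, 3, padding=1),
                                  nn.ReLU(),
                                  nn.Conv2d(32, 1, 1), nn.Sigmoid())

    def forward(self, ir, vis):
        x = torch.cat([ir, vis], dim=1)
        return self.head(torch.cat([self.local_branch(x),
                                    self.global_branch(x)], dim=1))


def cross_correlation_loss(fused, ir, vis, eps=1e-6):
    """One plausible reading of the unsupervised objective: push the fused
    image to correlate with both sources via normalized cross correlation."""
    def ncc(a, b):
        a = a - a.mean(dim=(2, 3), keepdim=True)
        b = b - b.mean(dim=(2, 3), keepdim=True)
        return (a * b).mean(dim=(2, 3)) / (a.std(dim=(2, 3)) * b.std(dim=(2, 3)) + eps)
    return -(ncc(fused, ir).mean() + ncc(fused, vis).mean())


if __name__ == "__main__":
    model = TCCFusionSketch()
    ir, vis = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
    fused = model(ir, vis)
    print(fused.shape, cross_correlation_loss(fused, ir, vis).item())
```

Running the two branches in parallel, rather than stacking them, lets the loss gradients reach the local and global paths independently, which matches the abstract's stated motivation for the parallel arrangement.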
ISSN: 0031-3203
EISSN: 1873-5142
DOI: 10.1016/j.patcog.2022.109295