MAGAN: Multi-Attention Generative Adversarial Network for Infrared and Visible Image Fusion


Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, p. 1
Main Authors: Huang, Shuying; Song, Zixiang; Yang, Yong; Wan, Weiguo; Kong, Xiangkai
Format: Journal Article
Language: English
Published: IEEE, 01.06.2023
Summary: Deep learning has been widely used in infrared and visible image fusion owing to its strong feature extraction and generalization capabilities. However, it is difficult to directly extract specific image features from different modal images. Therefore, according to the characteristics of infrared and visible images, this paper proposes a multi-attention generative adversarial network (MAGAN) for infrared and visible image fusion, which is composed of a multi-attention generator and two multi-attention discriminators. The multi-attention generator gradually realizes the extraction and fusion of image features by constructing two modules: a triple-path feature pre-fusion module (TFPM) and a feature emphasis fusion module (FEFM). The two multi-attention discriminators are constructed to ensure that the fused images retain the salient targets and the texture information from the source images. In MAGAN, an intensity attention and a texture attention are designed to extract the specific features of the source images to retain more intensity and texture information in the fused image. In addition, a saliency target intensity loss is defined to ensure that the fused images obtain more accurate salient information from infrared images. Experimental results on two public datasets show that the proposed MAGAN outperforms some state-of-the-art models in terms of visual effects and quantitative metrics.
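The abstract pairs an intensity attention (favoring bright thermal targets in the infrared image) with a texture attention (favoring edge-rich detail in the visible image). MAGAN learns these attentions inside a GAN, but the underlying idea can be sketched with hand-crafted analogues: a normalized-intensity map for the infrared input, a gradient-magnitude map for the visible input, and a per-pixel weighted pre-fusion. The function names and the specific weighting rule below are illustrative assumptions, not the paper's actual learned modules.

```python
import numpy as np

def intensity_attention(img):
    # Normalized intensity in [0, 1]: brighter pixels (e.g. thermal
    # targets in the infrared image) receive larger weights.
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

def texture_attention(img):
    # Gradient-magnitude map in [0, 1]: edge- and texture-rich pixels
    # (typical of the visible image) receive larger weights.
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return mag / (mag.max() + 1e-8)

def attention_prefusion(ir, vis):
    # Per-pixel convex combination of the two modalities, weighted by
    # their respective attention maps (a stand-in for the learned TFPM).
    w_ir = intensity_attention(ir)
    w_vis = texture_attention(vis)
    return (w_ir * ir + w_vis * vis) / (w_ir + w_vis + 1e-8)

# Toy inputs standing in for registered infrared/visible image pairs.
ir = np.random.rand(64, 64)
vis = np.random.rand(64, 64)
fused = attention_prefusion(ir, vis)
```

Because each fused pixel is a non-negative weighted average of the two source pixels, the result stays within the source intensity range; the paper's actual modules additionally pass these pre-fused features through the FEFM and train them adversarially against the two discriminators.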
ISSN:0018-9456
DOI:10.1109/TIM.2023.3282300