GANMcC: A Generative Adversarial Network With Multiclassification Constraints for Infrared and Visible Image Fusion

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, Vol. 70, pp. 1-14
Main Authors: Ma, Jiayi; Zhang, Hao; Shao, Zhenfeng; Liang, Pengwei; Xu, Han
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021
ISSN: 0018-9456, 1557-9662
DOI: 10.1109/TIM.2020.3038013

Summary: Visible images contain rich texture information, whereas infrared images have significant contrast. It is advantageous to combine these two kinds of information into a single image so that it has both good contrast and rich texture details. In general, previous fusion methods cannot achieve this goal well: the fused results are inclined toward either the visible or the infrared image. To address this challenge, a new fusion framework called generative adversarial network with multiclassification constraints (GANMcC) is proposed, which transforms image fusion into a simultaneous multidistribution estimation problem so as to fuse infrared and visible images in a more reasonable way. We adopt a generative adversarial network with multiclassification to estimate the distributions of the visible and infrared domains at the same time, in which the game of multiclassification discrimination forces the fused result to hold these two distributions in a more balanced manner, giving it both significant contrast and rich texture details. In addition, we design a specific content loss to constrain the generator; it introduces the idea of main and auxiliary terms into the extraction of gradient and intensity information, enabling the generator to extract sufficient information from the source images in a complementary manner. Extensive experiments demonstrate the advantages of our GANMcC over state-of-the-art methods in terms of both qualitative effects and quantitative metrics. Moreover, our method achieves good fused results even when the visible image is overexposed. Our code is publicly available at https://github.com/jiayi-ma/GANMcC.
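
The summary describes two concrete mechanisms: a main/auxiliary content loss and a multiclassification adversarial game. As a rough illustration only, the following is a minimal PyTorch sketch of both ideas, not the authors' implementation (that is available at the GitHub link above). The weights w_main and w_aux, the 0/1 class labels, the L1 and least-squares distances, and the assumption that the discriminator returns two scores per image are all illustrative choices; the paper's exact formulation may differ.

import torch
import torch.nn.functional as F

def sobel_gradient(img):
    # Approximate gradient magnitude of a single-channel batch (N, 1, H, W).
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    return F.conv2d(img, kx, padding=1).abs() + F.conv2d(img, ky, padding=1).abs()

def content_loss(fused, vis, ir, w_main=1.0, w_aux=0.5):
    # Gradient (texture) term: visible is the main source, infrared the auxiliary.
    grad = (w_main * F.l1_loss(sobel_gradient(fused), sobel_gradient(vis))
            + w_aux * F.l1_loss(sobel_gradient(fused), sobel_gradient(ir)))
    # Intensity (contrast) term: infrared is the main source, visible the auxiliary.
    inten = w_main * F.l1_loss(fused, ir) + w_aux * F.l1_loss(fused, vis)
    return grad + inten

def d_loss(d, vis, ir, fused):
    # The discriminator d is assumed to emit two scores per image:
    # column 0 = "visible-ness", column 1 = "infrared-ness".
    p_vis, p_ir, p_f = d(vis), d(ir), d(fused.detach())
    real = (F.mse_loss(p_vis[:, 0], torch.ones_like(p_vis[:, 0]))
            + F.mse_loss(p_ir[:, 1], torch.ones_like(p_ir[:, 1])))
    fake = F.mse_loss(p_f, torch.zeros_like(p_f))  # fused scores low on both classes
    return real + fake

def g_adv_loss(d, fused):
    # The generator pushes the fused image toward high scores on BOTH classes,
    # so the result inherits the visible and infrared distributions in balance.
    p_f = d(fused)
    return F.mse_loss(p_f, torch.ones_like(p_f))

In a training loop one would alternate between minimizing d_loss over the discriminator and minimizing content_loss plus a weighted g_adv_loss over the generator, as in a standard least-squares GAN.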