An Improved Infrared and Visible Image Fusion Using an Adaptive Contrast Enhancement Method and Deep Learning Network with Transfer Learning

Bibliographic Details
Published in: Remote Sensing (Basel, Switzerland), Vol. 14, No. 4, p. 939
Main Authors: Bhutto, Jameel Ahmed; Tian, Lianfang; Du, Qiliang; Sun, Zhengzheng; Yu, Lubin; Soomro, Toufique Ahmed
Format: Journal Article
Language: English
Published: Basel, MDPI AG, 01.02.2022

Summary: Deep learning (DL) has received significant attention in the field of infrared (IR) and visible (VI) image fusion, and several attempts have been made to enhance the quality of the final fused image. DL-based fusion produces better results than conventional methods; however, the captured images often lack useful information because of poor lighting, fog, dense smoke, haze, and sensor noise. This paper proposes an adaptive fuzzy-based preprocessing method that automatically enhances image contrast using adaptively calculated parameters. The enhanced images are then decomposed into base and detail layers by an anisotropic diffusion-based edge-preserving filter, which removes noise while preserving edges. The detail layers are fed into four convolutional layers of the VGG-19 network via transfer learning to extract feature maps, and these feature maps are combined by multiple fusion strategies to obtain the final fused detail layer. The base layers are fused by the PCA method to preserve energy information. Experimental results show that the proposed method achieves state-of-the-art performance compared with existing fusion methods in a subjective evaluation based on the visual judgment of experts and statistical tests. The objective assessment uses the metrics FMI, SSIMa, API, EN, QFAB, and NFAB adopted by the compared methods: the proposed method achieves gains of 0.2651 to 0.3951 in FMI, 0.5827 to 0.8469 in SSIMa, 56.3710 to 71.9081 in API, 4.0117 to 7.9907 in EN, and 0.6538 to 0.8727 in QFAB. It also reduces noise more strongly (NFAB from 0.3049 to 0.0021), further confirming its efficacy over conventional methods.
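
To make the pipeline described in the summary concrete, the following is a minimal Python sketch written under several stated assumptions: a standard Perona-Malik diffusion stands in for the paper's anisotropic diffusion-based edge-preserving filter, an l1-norm activity weighting on a single early VGG-19 layer (relu1_1) stands in for the paper's four-layer, multi-strategy detail fusion, and the adaptive fuzzy contrast-enhancement step is omitted. Function names such as fuse, pca_fuse, and vgg_activity are illustrative, not taken from the paper; this is not the authors' implementation.

# Minimal sketch of the fusion pipeline described above (assumptions noted in
# the lead-in text); requires numpy, torch, and torchvision.
import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights


def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.15):
    """Perona-Malik diffusion as a stand-in edge-preserving filter; returns the base layer."""
    u = img.astype(np.float64)
    for _ in range(n_iter):
        # differences toward the four neighbours (wrap-around borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # exponential edge-stopping function: low conduction across strong edges
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u = u + gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u


def pca_fuse(base_a, base_b):
    """Weight the two base layers by the leading eigenvector of their 2x2 covariance."""
    data = np.stack([base_a.ravel(), base_b.ravel()])
    vals, vecs = np.linalg.eigh(np.cov(data))
    v = np.abs(vecs[:, np.argmax(vals)])
    w = v / v.sum()
    return w[0] * base_a + w[1] * base_b


def vgg_activity(detail, features, layer_end=2):
    """l1-norm activity map of VGG-19 features for one detail layer
    (relu1_1 only here; the paper taps four convolutional layers)."""
    x = torch.from_numpy(detail).float()[None, None].repeat(1, 3, 1, 1)
    with torch.no_grad():
        feat = features[:layer_end](x)            # conv1_1 + ReLU
    act = feat.abs().sum(dim=1, keepdim=True)     # l1 norm over channels
    act = F.interpolate(act, size=detail.shape, mode="bilinear",
                        align_corners=False)
    return act[0, 0].numpy()


def fuse(ir, vi):
    """Fuse a registered IR / visible pair given as float arrays in [0, 255]."""
    base_ir, base_vi = anisotropic_diffusion(ir), anisotropic_diffusion(vi)
    det_ir, det_vi = ir - base_ir, vi - base_vi

    fused_base = pca_fuse(base_ir, base_vi)       # energy-preserving base fusion

    features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features.eval()
    a_ir = vgg_activity(det_ir, features)
    a_vi = vgg_activity(det_vi, features)
    w_ir = a_ir / (a_ir + a_vi + 1e-12)           # normalised activity weights
    fused_detail = w_ir * det_ir + (1.0 - w_ir) * det_vi

    return np.clip(fused_base + fused_detail, 0, 255)

A registered grayscale pair in [0, 255] would be fused with fuse(ir, vi); the PCA step gives more weight to the base layer carrying more energy, while the VGG activity maps decide, pixel by pixel, which detail layer contributes more to the fused detail.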
ISSN: 2072-4292
DOI: 10.3390/rs14040939