ASFusion: Adaptive visual enhancement and structural patch decomposition for infrared and visible image fusion

Bibliographic Details
Published in: Engineering Applications of Artificial Intelligence, Vol. 132, p. 107905
Main Authors: Zhou, Yiqiao; He, Kangjian; Xu, Dan; Tao, Dapeng; Lin, Xu; Li, Chengzhou
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.06.2024
Summary: Multimodal data fusion plays an increasingly important role in the field of artificial intelligence. The objective of Infrared and Visible Image Fusion (IVF) is to integrate information from different types of images to enhance the performance of target detection tasks. Meanwhile, object detection technology constitutes a crucial foundation of autonomous driving. However, visible images captured under low illumination often lack important details, resulting in suboptimal fusion results, which in turn degrade the accuracy of target detection. To address these issues, we propose an infrared and visible image fusion method based on adaptive visual enhancement and structural patch decomposition (ASFusion). First, we design an efficient algorithm based on the camera response model to enhance different exposure matrices, allowing adaptive enhancement of visible images. Second, we decompose the source infrared image and the enhanced visible image into three components, mean intensity, signal structure, and signal intensity, using structural patch decomposition (SPD), and then design a new membership-degree curve function to accurately estimate the weight of the mean intensity component. This estimation reduces artifacts and preserves the saliency of infrared targets. Third, to achieve higher contrast in the fusion result, we introduce sharpening operations to enhance the detail layers of both the infrared and visible images. Finally, the fused image is obtained by merging the base and detail layers. Qualitative and quantitative evaluations show that the proposed method outperforms twelve state-of-the-art image fusion methods. Additionally, object detection experiments demonstrate that ASFusion has strong potential to better serve high-level computer vision tasks. Our code is publicly available at https://github.com/ZhouVMC/ASFusion.
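The abstract does not spell out the decomposition itself; for orientation, the following is a minimal sketch of the classical structural patch decomposition that SPD-based fusion methods such as ASFusion build on. The function names, the 8x8 patch size, and the eps guard are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spd_decompose(patch, eps=1e-12):
    """Classical structural patch decomposition: patch = c * s + l."""
    x = patch.astype(np.float64).ravel()
    l = x.mean()                      # mean intensity component
    residual = x - l
    c = np.linalg.norm(residual)      # signal intensity (strength) component
    s = residual / (c + eps)          # unit-norm signal structure component
    return l, c, s

def spd_reconstruct(l, c, s, shape):
    """Invert the decomposition: x = c * s + l."""
    return (c * s + l).reshape(shape)

# Round-trip check on a random 8x8 patch (patch size is an illustrative choice).
patch = np.random.rand(8, 8)
l, c, s = spd_decompose(patch)
assert np.allclose(spd_reconstruct(l, c, s, patch.shape), patch)
```

In SPD-based fusion, the three components are usually fused with separate rules before patches are rebuilt through the same c * s + l relation; the paper's specific contribution here is the membership-degree weighting of the mean intensity component described in the summary above.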
ISSN: 0952-1976
DOI: 10.1016/j.engappai.2024.107905