Knowledge-embedded multi-layer collaborative adaptive fusion network: Addressing challenges in foggy conditions and complex imaging
| Published in | Journal of King Saud University. Computer and information sciences, Vol. 36, No. 10 |
|---|---|
| Main Authors | , , , , |
| Format | Journal Article |
| Language | English |
| Published | Elsevier B.V., 01.12.2024 |
| Subjects | |
| Online Access | Get full text |
Summary: Infrared and visible image fusion aims to generate high-quality images that serve both human and machine visual perception under extreme imaging conditions. However, current fusion methods rely primarily on datasets of infrared and visible images captured in clear weather. When applied to real-world scenarios, image fusion inevitably encounters adverse weather such as heavy fog, which makes it difficult to obtain effective information and degrades visual perception. To address these challenges, this paper proposes a Mean Teacher-based Self-supervised Image Restoration and multimodal Image Fusion joint learning network (SIRIFN), which improves the robustness of the fusion network in adverse weather by applying deep supervision from a guiding network to the learning network. Furthermore, to strengthen the network's information extraction and integration capabilities, a Multi-level Feature Collaborative adaptive Reconstruction Network (MFCRNet) is introduced; it adopts a multi-branch, multi-scale design with differentiated processing strategies for different features, preserving rich texture information while maintaining semantic consistency with the source images. Extensive experiments demonstrate that SIRIFN outperforms current state-of-the-art algorithms in both visual quality and quantitative evaluation. In particular, the joint implementation of image restoration and multimodal fusion provides more effective information under extreme weather conditions, thereby facilitating downstream visual tasks.
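The abstract describes a Mean Teacher arrangement in which a guiding (teacher) network supervises a learning (student) network. The sketch below illustrates only that generic mechanism: an EMA-updated teacher providing a consistency target for a student that fuses a degraded visible image with an infrared image. The function names, the L1 consistency loss, and the decay value are illustrative assumptions; SIRIFN's actual architecture and loss terms are not given in this record.

```python
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    # The guiding (teacher) network starts as a frozen copy of the learning (student) network.
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    # Exponential moving average: the teacher's weights slowly track the student's.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)

def training_step(student, teacher, foggy_vis, ir, optimizer, decay=0.999):
    # The student fuses the degraded (foggy) visible image with the infrared image;
    # the teacher's output on the same inputs serves as the consistency target
    # (the "deep supervision from a guiding network" mentioned in the abstract).
    student_out = student(foggy_vis, ir)
    with torch.no_grad():
        teacher_out = teacher(foggy_vis, ir)
    loss = F.l1_loss(student_out, teacher_out)  # consistency term only; the paper's full loss is not specified here
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student, decay)
    return loss.item()
```

In this kind of setup the teacher is never trained by backpropagation; it averages the student over time, which is what makes its outputs a stable supervision signal when the student's inputs are heavily degraded.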
ISSN: 1319-1578
DOI: 10.1016/j.jksuci.2024.102230