SIFusion: Lightweight infrared and visible image fusion based on semantic injection
Published in: PLoS ONE, Vol. 19, No. 11, p. e0307236
Main Authors: , , ,
Format: Journal Article
Language: English
Published: San Francisco: Public Library of Science, 06.11.2024
Summary: The objective of image fusion is to integrate complementary features from source images to better serve the needs of human and machine vision. However, existing image fusion algorithms predominantly focus on enhancing the visual appeal of the fused image for human perception, often neglecting its impact on subsequent high-level visual tasks, particularly the processing of semantic information. Moreover, fusion methods that do incorporate downstream tasks tend to be overly complex and computationally intensive, which hinders practical application. To address these issues, this paper proposes SIFusion, a lightweight infrared and visible image fusion method based on semantic injection. The method employs a semantic-aware branch to extract semantic feature information and then integrates these features into the fused features through a Semantic Injection Module (SIM) to meet the semantic requirements of high-level visual tasks. Furthermore, to reduce the complexity of the fusion network, the method introduces an Edge Convolution Block (ECB) based on structural reparameterization to enhance the representational capacity of the encoder and decoder. Extensive experimental comparisons demonstrate that the proposed method performs excellently in terms of both visual appeal and high-level semantics, providing satisfactory fusion results for subsequent high-level visual tasks even in challenging scenarios.
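The record gives only a high-level view of the Semantic Injection Module (SIM). One common way to inject semantic features into fusion features is a learned scale-and-shift modulation; the PyTorch sketch below illustrates that idea under this assumption. The class name, the modulation scheme, and the channel sizes are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticInjectionModule(nn.Module):
    """Hypothetical SIM sketch: conditions fusion features on semantic
    features via a learned per-channel scale and shift. The actual
    SIFusion module may differ; this only illustrates the injection idea."""

    def __init__(self, fusion_channels: int, semantic_channels: int):
        super().__init__()
        # 1x1 convs project semantic features to modulation parameters.
        self.to_scale = nn.Conv2d(semantic_channels, fusion_channels, kernel_size=1)
        self.to_shift = nn.Conv2d(semantic_channels, fusion_channels, kernel_size=1)

    def forward(self, fusion_feat, semantic_feat):
        # Resize semantic features to the fusion feature map's resolution.
        semantic_feat = F.interpolate(
            semantic_feat, size=fusion_feat.shape[-2:],
            mode="bilinear", align_corners=False,
        )
        scale = self.to_scale(semantic_feat)
        shift = self.to_shift(semantic_feat)
        # Inject semantics through residual modulation of the fused features.
        return fusion_feat + fusion_feat * scale + shift

# Example: inject 19-channel segmentation features into 64-channel fusion features.
sim = SemanticInjectionModule(fusion_channels=64, semantic_channels=19)
out = sim(torch.randn(1, 64, 120, 160), torch.randn(1, 19, 60, 80))
```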
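The ECB's structural reparameterization likewise admits a small illustration: a multi-branch convolution is trained, then its branches are folded into a single convolution for inference, since convolution is linear in its kernel. The sketch below assumes a simple 3x3 + 1x1 branch pair rather than the full edge-oriented branch set, so it demonstrates the folding technique, not the paper's exact block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReparamConv(nn.Module):
    """Minimal structural-reparameterization sketch in the spirit of an
    edge-oriented convolution block: parallel 3x3 and 1x1 branches at
    training time collapse into one 3x3 conv for inference."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv1 = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        # Training-time multi-branch forward pass.
        return self.conv3(x) + self.conv1(x)

    def reparameterize(self) -> nn.Conv2d:
        # Zero-pad the 1x1 kernel to 3x3 and sum the kernels and biases;
        # by linearity, the fused conv reproduces the two-branch output.
        fused = nn.Conv2d(self.conv3.in_channels, self.conv3.out_channels, 3, padding=1)
        fused.weight.data = self.conv3.weight.data + F.pad(self.conv1.weight.data, [1, 1, 1, 1])
        fused.bias.data = self.conv3.bias.data + self.conv1.bias.data
        return fused

# Sanity check: the folded conv matches the two-branch block.
x = torch.randn(1, 8, 16, 16)
block = ReparamConv(8).eval()
assert torch.allclose(block(x), block.reparameterize()(x), atol=1e-5)
```

Because both branches share stride and effective padding, summing the zero-padded kernels and biases yields a single convolution with identical output, which keeps inference cost low without reducing training-time capacity.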
Bibliography: Competing Interests: The authors have declared that no competing interests exist.
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0307236