Focus Relationship Perception for Unsupervised Multi-Focus Image Fusion

Bibliographic Details
Published in: IEEE Transactions on Multimedia, Vol. 26, pp. 6155-6165
Main Authors: Liu, Jinyang; Li, Shutao; Dian, Renwei; Song, Ze
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024
Summary: Multi-focus image fusion extracts the in-focus regions from different source images and combines them into a fully clear image. Existing unsupervised methods typically use gradient information to measure the focus regions and generate a fusion weight map, but ordinary gradient operators struggle to measure information accurately in regions with weak textures. In addition, using only gradient information as a constraint cannot make the model fully distinguish all the focus regions in the image, which seriously restricts the clarity of the fused result. To address these issues, a novel unsupervised multi-focus image fusion method is proposed in this paper. Specifically, a neighborhood information fusion network is designed to generate an initial fusion weight map. It captures features within different neighborhood ranges at once, which strengthens the information association between different regions. In addition, to further improve the model's feature extraction ability in regions with low texture information, a local difference evaluation loss function is proposed and combined with the gradient measure loss function to constrain the network. Finally, a fusion weight optimization module is proposed to improve the clarity of the fused image in regions that are defocused in both source images and in overexposed regions, by redistributing the weights of the different source images. The proposed fusion method is compared with state-of-the-art methods on three public multi-focus datasets. Experimental results indicate that it achieves better performance both qualitatively and quantitatively.
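To make the summary's pairing of constraints concrete, below is a minimal NumPy sketch of how a gradient-measure term and a local-difference term might jointly score a per-pixel fusion weight map. It is an illustrative assumption, not the authors' implementation: the paper trains a neighborhood information fusion network and adds a fusion weight optimization module, neither of which appears here, and the names gradient_energy, local_difference, and unsupervised_loss are hypothetical.

```python
# Illustrative sketch only -- not the paper's network or loss functions.
import numpy as np

def gradient_energy(img):
    # Squared finite-difference gradients; a simple stand-in for a gradient measure.
    gy, gx = np.gradient(img.astype(np.float64))
    return gx ** 2 + gy ** 2

def local_difference(img, k=7):
    # |pixel - mean of its k x k neighborhood|; a crude local-difference cue
    # for regions whose textures are too weak for gradient operators.
    img = img.astype(np.float64)
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    # Summed-area table gives each pixel's neighborhood mean efficiently.
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    h, w = img.shape
    mean = (c[k:k + h, k:k + w] - c[:h, k:k + w]
            - c[k:k + h, :w] + c[:h, :w]) / (k * k)
    return np.abs(img - mean)

def unsupervised_loss(weight, src_a, src_b, alpha=0.5):
    # Hypothetical combined constraint: the fused image should retain as much
    # gradient energy and local difference as the sharper of the two sources.
    fused = weight * src_a + (1.0 - weight) * src_b
    grad_target = np.maximum(gradient_energy(src_a), gradient_energy(src_b))
    diff_target = np.maximum(local_difference(src_a), local_difference(src_b))
    grad_loss = np.mean((gradient_energy(fused) - grad_target) ** 2)
    diff_loss = np.mean((local_difference(fused) - diff_target) ** 2)
    return alpha * grad_loss + (1.0 - alpha) * diff_loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((64, 64))      # e.g. foreground-focused source
    b = rng.random((64, 64))      # e.g. background-focused source
    w = np.full((64, 64), 0.5)    # trivial initial fusion weight map
    print(unsupervised_loss(w, a, b))
```

In the paper itself, the weight map is produced by the trained network rather than fixed, and the optimization module further redistributes the weights; the sketch only mirrors the loss-side idea of pairing a gradient measure with a local-difference measure.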
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2023.3347099