Adaptive zero-learning medical image fusion

Bibliographic Details
Published in: Biomedical Signal Processing and Control, Vol. 84, p. 105008
Main Authors: Yang, Feng; Jia, Manyu; Lu, Liyun; Yin, Mengxiao
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.07.2023
More Information
Summary: Medical image fusion integrates complementary information from multi-modal medical images so that more comprehensive and accurate results can be obtained. This paper proposes a new two-scale zero-learning medical image fusion method that combines a pre-trained Res2Net with an adaptive guided filter. First, the method uses the guided filter to decompose a medical image into a base layer representing large-scale intensity variations and a detail layer containing small-scale changes. Because the guided filter's parameters affect the fused images and manual parameter selection is time-consuming, an adaptive guided filter based on multi-modal medical image features is proposed. The detail layers are then fused with an elementwise-sum strategy to retain more detail information from the source images, while the base layers are fused using deep feature maps extracted by the pre-trained Res2Net. Finally, the fused detail and base layers are reconstructed to obtain the fused medical image. The superiority of the proposed method is demonstrated through ablation studies and through comparison with seven typical and state-of-the-art image fusion methods in terms of visual effect and evaluation metrics. The experimental results show that the proposed method outperforms the others in retaining effective detail information and image clarity.
•A new multi-modal medical image fusion method is proposed.
•An adaptive guided filter is proposed for two-scale decomposition.
•The elementwise-sum strategy is used to fuse detail layers for better performance.
•A pre-trained Res2Net model is used to extract image feature maps.
•Our method generates structurally complete and detail-clear fused medical images.
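The abstract outlines a three-stage pipeline: two-scale decomposition with a guided filter, elementwise-sum fusion of the detail layers, and base-layer fusion guided by deep features from a pre-trained Res2Net. The sketch below is an illustrative approximation of that pipeline under stated assumptions, not the authors' code: it uses a fixed-parameter guided filter in place of the adaptive one, an assumed L1-norm softmax weighting of deep feature maps for the base layers, and a hypothetical `extract_features` callback standing in for the Res2Net feature extractor.

```python
"""
Illustrative sketch of the two-scale fusion pipeline outlined in the abstract.
Assumptions (not from the paper): fixed guided-filter radius/eps instead of the
adaptive parameter selection; base-layer weights from the L1 norm of deep
feature maps; `extract_features` is a hypothetical callback representing the
pre-trained Res2Net feature extractor.
"""
import numpy as np
import cv2


def guided_filter(guide, src, radius=8, eps=1e-2):
    """Classic guided filter (He et al.): edge-preserving smoothing of src by guide."""
    win = (2 * radius + 1, 2 * radius + 1)
    mean_I = cv2.blur(guide, win)
    mean_p = cv2.blur(src, win)
    var_I = cv2.blur(guide * guide, win) - mean_I * mean_I
    cov_Ip = cv2.blur(guide * src, win) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return cv2.blur(a, win) * guide + cv2.blur(b, win)


def two_scale_decompose(img, radius=8, eps=1e-2):
    """Split an image into a base layer (large-scale intensity) and a detail layer."""
    base = guided_filter(img, img, radius, eps)
    return base, img - base


def fuse_pair(img_a, img_b, extract_features):
    """Fuse two registered single-channel float32 images in [0, 1].

    extract_features(img) -> (C, H, W) feature maps with the same spatial size
    as img (in practice the network activations would need upsampling).
    """
    base_a, detail_a = two_scale_decompose(img_a)
    base_b, detail_b = two_scale_decompose(img_b)

    # Detail layers: elementwise-sum strategy, as described in the abstract.
    detail_fused = detail_a + detail_b

    # Base layers: per-pixel weights from the L1 norm of deep feature maps
    # (assumed weighting; whether features come from the base layers or the
    # source images is also an assumption here).
    act_a = np.abs(extract_features(base_a)).sum(axis=0)
    act_b = np.abs(extract_features(base_b)).sum(axis=0)
    w_a = act_a / (act_a + act_b + 1e-12)
    base_fused = w_a * base_a + (1.0 - w_a) * base_b

    # Reconstruction: add the fused base and detail layers back together.
    return np.clip(base_fused + detail_fused, 0.0, 1.0)
```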
ISSN: 1746-8094
1746-8108
DOI: 10.1016/j.bspc.2023.105008