Multi-source deep feature fusion for medical image analysis

Bibliographic Details
Published in: Multidimensional Systems and Signal Processing, Vol. 36, No. 1
Main Authors: Gürsoy, Ercan; Kaya, Yasin
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.12.2025
Summary: In image fusion, several images are combined into one image that contains information from all input images. In medical image analysis, image fusion can help to improve the accuracy of diagnosis and treatment planning. One approach to image fusion uses saliency maps, where an algorithm highlights the most informative regions of an image and then combines these regions into a single image. This method can be particularly useful in medical image analysis, where certain areas of an image may be especially critical. This study proposes a novel ResNet-based multi-head model for medical image analysis that takes the fusion of saliency maps and RGB images as input. The fused images generated with saliency maps contain more visible features, and the saliency maps produced by the pre-trained model also retain background information. A combined dataset from two publicly available sources, containing three classes of X-ray images (healthy, COVID-19, and pneumonia), was used to evaluate the proposed model. The proposed multi-head CNN model improves the average classification accuracy from 94.68% to 96.72% under five-fold cross-validation. This approach could be implemented in an end-to-end computer-aided diagnosis system to shorten evaluation time.
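
The multi-head design described in the summary can be illustrated with a minimal sketch: two ResNet backbones, one fed the RGB X-ray and one fed the saliency-map fusion, whose pooled features are concatenated and classified into the three classes. The sketch below assumes PyTorch/torchvision; the ResNet-18 backbones, concatenation-based fusion, and classifier layer sizes are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a two-branch ("multi-head") ResNet classifier.
# Assumptions: PyTorch/torchvision, ResNet-18 backbones, feature fusion by
# concatenation. This is not the paper's exact architecture.
import torch
import torch.nn as nn
from torchvision import models


class TwoBranchResNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # One backbone per input: RGB image and saliency-map fusion.
        self.rgb_branch = models.resnet18(weights=None)
        self.sal_branch = models.resnet18(weights=None)
        feat_dim = self.rgb_branch.fc.in_features  # 512 for ResNet-18
        # Drop the original classification heads; keep pooled features.
        self.rgb_branch.fc = nn.Identity()
        self.sal_branch.fc = nn.Identity()
        # Fuse the two deep feature vectors, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),  # healthy / COVID-19 / pneumonia
        )

    def forward(self, rgb: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
        f_rgb = self.rgb_branch(rgb)            # (B, feat_dim)
        f_sal = self.sal_branch(saliency)       # (B, feat_dim)
        fused = torch.cat([f_rgb, f_sal], dim=1)  # deep feature fusion
        return self.classifier(fused)


if __name__ == "__main__":
    model = TwoBranchResNet(num_classes=3)
    rgb = torch.randn(2, 3, 224, 224)      # batch of RGB chest X-rays
    sal = torch.randn(2, 3, 224, 224)      # corresponding saliency-map inputs
    print(model(rgb, sal).shape)           # torch.Size([2, 3])
```

Concatenation is only one plausible fusion choice; the accuracy figures quoted in the summary refer to the authors' own architecture and training setup, not to this sketch.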
ISSN: 0923-6082, 1573-0824
DOI: 10.1007/s11045-024-00897-z