Medical Image Fusion With Parameter-Adaptive Pulse Coupled Neural Network in Nonsubsampled Shearlet Transform Domain

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, Vol. 68, No. 1, pp. 49-64
Main Authors: Yin, Ming; Liu, Xiaoning; Liu, Yu; Chen, Xun
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2019
ISSN: 0018-9456, 1557-9662
DOI: 10.1109/TIM.2018.2838778

More Information
Summary: As an effective way to integrate the information contained in multiple medical images with different modalities, medical image fusion has emerged as a powerful technique in various clinical applications such as disease diagnosis and treatment planning. In this paper, a new multimodal medical image fusion method in the nonsubsampled shearlet transform (NSST) domain is proposed. In the proposed method, NSST decomposition is first performed on the source images to obtain their multiscale and multidirection representations. The high-frequency bands are fused by a parameter-adaptive pulse-coupled neural network (PA-PCNN) model, in which all the PCNN parameters are adaptively estimated from the input band. The low-frequency bands are merged by a novel strategy that simultaneously addresses two crucial issues in medical image fusion, namely, energy preservation and detail extraction. Finally, the fused image is reconstructed by performing the inverse NSST on the fused high-frequency and low-frequency bands. The effectiveness of the proposed method is verified on four categories of medical image fusion problems [computed tomography (CT) and magnetic resonance (MR), MR-T1 and MR-T2, MR and positron emission tomography, and MR and single-photon emission CT] with more than 80 pairs of source images in total. Experimental results demonstrate that the proposed method achieves more competitive performance than nine representative medical image fusion methods, yielding state-of-the-art results in both visual quality and objective assessment.
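
The summary above outlines a three-stage pipeline: multiscale decomposition, band-wise fusion rules, and inverse transform. The Python sketch below illustrates that control flow only and is not the authors' implementation: a Gaussian low-pass split stands in for the NSST decomposition, a simplified PCNN with fixed (not adaptively estimated) parameters stands in for the PA-PCNN high-frequency rule, and a plain average replaces the paper's energy-preservation/detail-extraction strategy for the low-frequency bands. All function names and parameter values are illustrative assumptions.

# Minimal sketch of the fusion pipeline described in the summary.
# NOT the authors' method: simplified stand-ins are used throughout.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter


def split_bands(img, sigma=2.0):
    """Stand-in for NSST: one low-frequency band plus one high-frequency band."""
    low = gaussian_filter(img, sigma)
    return low, img - low


def simplified_pcnn_firing(band, iterations=110, alpha_f=0.1, alpha_theta=0.2,
                           beta=0.5, v_theta=20.0):
    """Count how often each neuron fires when stimulated by |band| (fixed parameters)."""
    stim = np.abs(band)
    stim = stim / (stim.max() + 1e-12)          # normalized external stimulus
    feed = np.zeros_like(stim)                  # feeding input F
    theta = np.ones_like(stim)                  # dynamic threshold
    pulse = np.zeros_like(stim)                 # output Y of previous step
    fired = np.zeros_like(stim)                 # accumulated firing counts
    for _ in range(iterations):
        link = uniform_filter(pulse, size=3)    # linking via 3x3 neighborhood
        feed = np.exp(-alpha_f) * feed + stim
        u = feed * (1.0 + beta * link)          # internal activity U
        pulse = (u > theta).astype(float)       # a neuron fires when U exceeds theta
        theta = np.exp(-alpha_theta) * theta + v_theta * pulse
        fired += pulse
    return fired


def fuse(img_a, img_b):
    low_a, high_a = split_bands(img_a)
    low_b, high_b = split_bands(img_b)
    # High-frequency rule: keep the coefficient whose neuron fired more often.
    pick_a = simplified_pcnn_firing(high_a) >= simplified_pcnn_firing(high_b)
    high_f = np.where(pick_a, high_a, high_b)
    # Low-frequency rule: plain average (placeholder for the paper's strategy).
    low_f = 0.5 * (low_a + low_b)
    # "Inverse transform" of this simple split is just the sum of the bands.
    return low_f + high_f


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((64, 64))
    b = rng.random((64, 64))
    print(fuse(a, b).shape)                     # (64, 64)

In the paper itself, the decomposition is the full NSST with multiple scales and directions, and the PCNN parameters are estimated from each input band; the stand-ins above only mirror the overall structure of the approach.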