New region-based image fusion scheme using the discrete wavelet frame transform
Published in | 2016 12th World Congress on Intelligent Control and Automation (WCICA), pp. 3066-3070
Main Authors | , , , ,
Format | Conference Proceeding
Language | English
Published | IEEE, 01.06.2016
Summary | In the field of image fusion, multi-source image fusion methods at the pixel level can be classified into two categories: fusion in the spatial domain and fusion in the transform domain. When the coefficients of a fused image are combined, a fusion rule based on individual pixels or windows is usually applied, so the local features of the image are often not well represented. To resolve this problem, we propose a fusion method based on the discrete wavelet frame transform and regional characteristics. First, transform coefficients are obtained for the two source images using the discrete wavelet frame transform. An average image, which roughly represents the features of the two source images, is then acquired by averaging the transform coefficients. The average image is segmented according to region features, and the region coordinates obtained by segmentation are mapped onto the discrete wavelet frame coefficients of the source images. Finally, the coefficients of each region are combined using region-specific fusion rules. Our experimental results demonstrate that the proposed method outperforms fusion methods based on the Laplacian pyramid transform and on the shift-invariant discrete wavelet transform in preserving the regional features of the source images, and delivers better performance in terms of both visual effects and an objective index.
DOI | 10.1109/WCICA.2016.7578615
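The summary describes the pipeline but gives no implementation. The following is a minimal sketch under stated assumptions: PyWavelets' stationary wavelet transform (`pywt.swt2`/`pywt.iswt2`) stands in for the discrete wavelet frame transform, a crude intensity-quantile labelling stands in for the paper's region segmentation, and `fuse_region_based`, the activity measure, and all parameters are illustrative choices, not the authors' method.

```python
import numpy as np
import pywt


def fuse_region_based(img_a, img_b, wavelet="db2", level=2, n_regions=4):
    """Fuse two registered grayscale images whose dimensions are divisible by 2**level."""
    # 1. Shift-invariant (undecimated) wavelet decomposition of both sources;
    #    swt2 is used here as a stand-in for the discrete wavelet frame transform.
    coeffs_a = pywt.swt2(img_a.astype(float), wavelet, level=level)
    coeffs_b = pywt.swt2(img_b.astype(float), wavelet, level=level)

    # 2. "Average image": mean of the coarsest approximation coefficients,
    #    roughly capturing the common features of the two sources.
    avg = 0.5 * (coeffs_a[0][0] + coeffs_b[0][0])

    # 3. Placeholder segmentation: split the average image into n_regions
    #    intensity-quantile regions (the paper uses a real region segmentation).
    edges = np.quantile(avg, np.linspace(0, 1, n_regions + 1)[1:-1])
    labels = np.digitize(avg, edges)  # region label per pixel, 0..n_regions-1

    fused = []
    for (ca_a, det_a), (ca_b, det_b) in zip(coeffs_a, coeffs_b):
        # 4. Region-wise rule: per region, keep the coefficients of the source
        #    with the larger regional activity (sum of absolute detail energy).
        act_a = sum(np.abs(d) for d in det_a)
        act_b = sum(np.abs(d) for d in det_b)
        choose_a = np.zeros(avg.shape, dtype=bool)
        for r in range(n_regions):
            mask = labels == r
            if mask.any():
                choose_a[mask] = act_a[mask].sum() >= act_b[mask].sum()
        ca = np.where(choose_a, ca_a, ca_b)
        det = tuple(np.where(choose_a, da, db) for da, db in zip(det_a, det_b))
        fused.append((ca, det))

    # 5. Inverse undecimated transform reconstructs the fused image.
    return pywt.iswt2(fused, wavelet)
```

Calling `fuse_region_based` on two registered multi-focus images (e.g., 256x256 arrays) returns a fused array of the same size; swapping the quantile labelling for a proper region segmentation and refining the per-region rule would bring this sketch closer to the method summarized above.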