Implementation of image fusion model using DCGAN


Bibliographic Details
Published in: I-manager's Journal on Image Processing, Vol. 9, No. 4, p. 35
Main Authors: Sreedhar, P. S. S. S.; Balaji, Tedla; Sai, Somayajulu Meduri
Format: Journal Article
Language: English
Published: Nagercoil: iManager Publications, 01.10.2022

Summary: Remote Sensing Images (RSI) are captured by satellites. The quality of RSIs depends primarily on environmental conditions and the capability of the image-capturing device. Rapid technological development has led to the generation of High-Resolution (HR) satellite images; however, these images must be processed systematically to obtain the best results. A new Image Fusion (IF) technique based on wavelets and Deep Convolutional Generative Adversarial Networks (DCGAN) was designed to produce super-resolution satellite images. A Residual Convolutional Neural Network (ResNet) increases the accuracy of the fused image by mitigating the vanishing gradient problem. Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Feature Similarity Index Measure (FSIM), and Universal Image Quality (UIQ) are taken as the metrics for comparing the results with other models. The experimental results are better than those of previous methods and minimize the spatial and spectral losses during fusion.
ISSN: 2349-4530; 2349-6827
DOI: 10.26634/jip.9.4.19229
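
The summary lists PSNR, SSIM, FSIM, and UIQ as the evaluation metrics. As a minimal illustration of how such metrics can be computed, the NumPy sketch below implements PSNR and a global (non-windowed) form of the Universal Image Quality index; the function names, the max_val default of 255, and the global UIQ form are assumptions for illustration and do not reproduce the paper's evaluation code. SSIM and FSIM are omitted because their sliding-window implementations are more involved (SSIM is available, for example, in scikit-image).

import numpy as np

def psnr(reference, fused, max_val=255.0):
    # Peak Signal to Noise Ratio in dB between a reference and a fused image.
    mse = np.mean((reference.astype(np.float64) - fused.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

def uiq(reference, fused):
    # Global Universal Image Quality index (Wang & Bovik, 2002):
    # Q = 4*cov(x, y)*mean(x)*mean(y) / ((var(x) + var(y)) * (mean(x)**2 + mean(y)**2)).
    # Computed here over the whole image; the original definition averages Q over sliding windows.
    x = reference.astype(np.float64).ravel()
    y = fused.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return (4.0 * cov * mx * my) / ((vx + vy) * (mx ** 2 + my ** 2))

# Example usage with random stand-in images (hypothetical data, not from the paper):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.integers(0, 256, size=(256, 256)).astype(np.float64)
    fused = np.clip(reference + rng.normal(0.0, 5.0, size=reference.shape), 0.0, 255.0)
    print(f"PSNR: {psnr(reference, fused):.2f} dB, UIQ: {uiq(reference, fused):.4f}")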