Hyperspectral and Multispectral Image Fusion Via Self-Supervised Loss and Separable Loss

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, Vol. 60, pp. 1-12
Main Authors: Gao, Huiling; Li, Shutao; Dian, Renwei
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
Summary: Fusion of hyperspectral images (HSIs), which have low spatial and high spectral resolution, with multispectral images (MSIs), which have high spatial and low spectral resolution, is an important way to improve spatial resolution. Existing deep-learning-based image fusion methods usually neglect the ability of neural networks to understand differential features, and their loss constraints do not stem from the physical characteristics of hyperspectral (HS) imaging sensors. We therefore propose a self-supervised loss and a spatially and spectrally separable loss. 1) Self-supervised loss: unlike the previous practice of directly stacking the upsampled HSIs and MSIs as input, we expect the processed HSIs to preserve the integrity of the HSI information while achieving the most reasonable balance between overall spatial and spectral features. First, the preinterpolated HSIs are decomposed into subspaces that serve as self-supervised labels; a network is then designed to learn this subspace information and obtain the most discriminative features. 2) Separable loss: according to the physical characteristics of HSIs, the pixel-based mean squared error loss is first divided into a spatial-domain loss and a spectral-domain loss; the similarity score of the two images is then calculated and used to construct the weighting coefficients of the two domain losses, and the separable loss is finally expressed as their weighted combination. Experiments on public benchmark datasets indicate that the self-supervised loss and the separable loss improve fusion performance.
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2022.3204769
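
The abstract describes two loss constructions: a self-supervised label obtained by decomposing the pre-interpolated HSI into subspaces, and a separable loss that splits the pixelwise MSE into a spatial-domain term and a spectral-domain term weighted by an image-similarity score. The sketch below gives one plausible reading of both ideas in PyTorch; the framework choice, the truncated-SVD decomposition, the cosine-based similarity score, and all names (subspace_labels, separable_loss, rank) are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def subspace_labels(hsi_up: torch.Tensor, rank: int = 8):
        """Decompose a pre-interpolated HSI (C, H, W) into a low-rank spectral
        subspace via truncated SVD; the basis and coefficient maps can serve
        as self-supervised labels for a subspace-learning network."""
        C, H, W = hsi_up.shape
        X = hsi_up.reshape(C, H * W)                 # per-pixel spectra as columns
        U, S, Vh = torch.linalg.svd(X, full_matrices=False)
        basis = U[:, :rank]                          # (C, rank) spectral basis
        coeffs = (basis.T @ X).reshape(rank, H, W)   # subspace coefficient maps
        return basis, coeffs

    def separable_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        """Split the pixelwise MSE into spatial- and spectral-domain terms and
        weight them by a similarity score. pred/target: (B, C, H, W)."""
        # Spatial-domain term: plain per-band MSE.
        spatial = F.mse_loss(pred, target)
        # Spectral-domain term: MSE between L2-normalized per-pixel spectra,
        # so only spectral shape (not magnitude) is penalized.
        p = F.normalize(pred.flatten(2), dim=1)      # (B, C, H*W)
        t = F.normalize(target.flatten(2), dim=1)
        spectral = F.mse_loss(p, t)
        # Similarity score -> weighting coefficients; whole-image cosine
        # similarity here is a stand-in for the paper's score (assumption).
        with torch.no_grad():
            sim = F.cosine_similarity(pred.flatten(1), target.flatten(1), dim=1)
            w_spatial = sim.mean().clamp(0.0, 1.0)
        return w_spatial * spatial + (1.0 - w_spatial) * spectral

Under these assumptions, basis, coeffs = subspace_labels(hsi_up) yields regression targets for the subspace-learning branch, and separable_loss(pred, target) can replace a plain MSE term during training.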