Frequency-Spatial Domain Feature Fusion for Spectral Super-Resolution

Bibliographic Details
Published in: IEEE Transactions on Computational Imaging, Vol. 10, pp. 589-599
Main Authors: Tan, Lishan; Dian, Renwei; Li, Shutao; Liu, Jinyang
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024
Summary: The purpose of spectral super-resolution (SSR) is to reconstruct a hyperspectral image (HSI) from an RGB image, which significantly reduces the difficulty of acquiring HSIs. Most existing SSR methods adopt convolutional neural networks (CNNs) as the basic framework. However, the capability of CNNs to capture global context is limited, which constrains SSR performance. In this paper, we propose a novel frequency-spatial domain feature fusion network (FSDFF) for SSR, which simultaneously learns and fuses the frequency-domain and spatial-domain features of the HSI. Frequency-domain features reflect the global information of an image and can therefore be used to obtain the global context of the HSI, alleviating the limitation of CNNs in capturing global context. Spatial-domain features contain abundant local structural information, which is beneficial for preserving spatial details in the SSR task. The mutual fusion of the two better models the interrelationship between the HSI and the RGB image, thereby achieving better SSR performance. In FSDFF, we design a frequency domain feature learning branch (FDFL) and a spatial domain feature learning branch (SDFL) to learn the frequency- and spatial-domain features of the HSI. Furthermore, a cross-domain feature fusion module (CDFF) is designed to facilitate the complementary fusion of the two types of features. Experimental results on two public datasets indicate that FSDFF achieves state-of-the-art performance.
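The core intuition behind the abstract (frequency-domain features carry global context, spatial-domain features carry local structure, and the two can be fused) can be illustrated with a minimal NumPy sketch. This is only an illustration of the general idea, not the paper's FSDFF architecture: the branch and fusion functions below are hypothetical stand-ins, with a 2D FFT for the frequency branch and a 3x3 box filter for the spatial branch.

```python
import numpy as np

def frequency_branch(x):
    # 2D FFT: every frequency coefficient sums over ALL pixels,
    # so these features encode global context
    f = np.fft.fft2(x)
    return np.stack([f.real, f.imag])  # real/imag parts as two feature maps

def spatial_branch(x):
    # 3x3 box filter: each output depends only on a local neighborhood,
    # so these features preserve local structure
    pad = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def fuse(x):
    # naive cross-domain fusion: bring the frequency features back to the
    # spatial domain (the FFT round trip is lossless) and stack them with
    # the spatial features as channels
    fr, fi = frequency_branch(x)
    back = np.fft.ifft2(fr + 1j * fi).real
    return np.stack([back, spatial_branch(x)])

# usage: a random 8x8 "image" yields a 2-channel fused feature map
x = np.random.rand(8, 8)
fused = fuse(x)
print(fused.shape)
```

In FSDFF the two branches are learned networks and the fusion is a dedicated module (CDFF), whereas here the transforms are fixed; the sketch only shows why the frequency domain offers a cheap route to global context that a small convolution cannot provide.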
ISSN:2573-0436
2333-9403
DOI:10.1109/TCI.2024.3384811