Spatial and Channel Aggregation Network for Lightweight Image Super-Resolution


Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 23, No. 19, p. 8213
Main Authors: Wu, Xianyu; Zuo, Linze; Huang, Feng
Format: Journal Article
Language: English
Published: Switzerland: MDPI AG, 01.10.2023
Summary: Advanced deep learning-based Single Image Super-Resolution (SISR) techniques aim to restore high-frequency image detail and enhance imaging resolution using fast, lightweight network architectures. Existing SISR methods struggle to balance performance against computational cost, which hinders their practical application. To address this challenge, the present study introduces a lightweight network, the Spatial and Channel Aggregation Network (SCAN), designed for image super-resolution (SR) tasks. SCAN is the first SISR method to combine large-kernel convolutions with feature reduction operations, a design that lets the network concentrate on the difficult extraction of intermediate-level information and improves both its performance and its efficiency. In addition, a novel 9 × 9 large-kernel convolution further expands the receptive field. On benchmark datasets, SCAN outperforms state-of-the-art lightweight SISR methods by 0.13 dB in peak signal-to-noise ratio (PSNR) and 0.0013 in structural similarity (SSIM); on remote sensing datasets, it achieves gains of 0.4 dB in PSNR and 0.0033 in SSIM.
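As a rough illustration of the design sketched in the summary, the following is a minimal PyTorch sketch that pairs a 9 × 9 depthwise convolution with a channel (feature) reduction step. The module name, layer ordering, and reduction ratio are illustrative assumptions, not the authors' actual SCAN implementation.

# Hypothetical sketch of a large-kernel block with channel reduction,
# loosely following the summary's description. Names, layer order, and
# the reduction ratio are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class LargeKernelReductionBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        reduced = channels // reduction
        # 1x1 conv reduces channels before the expensive large-kernel conv.
        self.reduce = nn.Conv2d(channels, reduced, kernel_size=1)
        # 9x9 depthwise conv enlarges the receptive field at low cost.
        self.large_kernel = nn.Conv2d(
            reduced, reduced, kernel_size=9, padding=4, groups=reduced
        )
        # 1x1 conv restores the original channel count.
        self.expand = nn.Conv2d(reduced, channels, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.act(self.reduce(x))
        y = self.act(self.large_kernel(y))
        return x + self.expand(y)  # residual connection, common in SR blocks

# Usage: a 64-channel feature map, as in many lightweight SR networks.
block = LargeKernelReductionBlock(64)
out = block(torch.randn(1, 64, 48, 48))
print(out.shape)  # torch.Size([1, 64, 48, 48])

Reducing channels with the 1 × 1 convolution before the 9 × 9 depthwise convolution keeps the large-kernel stage cheap while still widening the receptive field, which is the performance-versus-cost trade-off the summary highlights.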
ISSN: 1424-8220
DOI: 10.3390/s23198213