Single image super‐resolution based on progressive fusion of orientation‐aware features
| Published in | Pattern Recognition Vol. 133; p. 109038 |
|---|---|
| Main Authors | , , , , , , , , |
| Format | Journal Article |
| Language | English |
| Published | Elsevier Ltd, 01.01.2023 |
Summary: | • We combine 1D and 2D convolutional kernels to extract orientation-aware features. • We employ a channel attention mechanism to adaptively select informative orientation-aware features. • A progressive feature fusion scheme is proposed to fuse hierarchical features.
Single image super-resolution (SISR) is an active research topic in image processing, computer vision, and pattern recognition: it restores high-frequency details and textures from a low-resolution input image. In this paper, we aim to build more accurate and faster SISR models by developing better-performing feature extraction and fusion techniques. First, we propose a novel Orientation-Aware feature extraction/selection Module (OAM), which combines 1D and 2D convolutional kernels (i.e., 3×1, 1×3, and 3×3) to extract orientation-aware features. A channel attention mechanism is deployed within each OAM to perform scene-specific selection among the informative outputs of the orientation-dependent kernels (e.g., horizontal, vertical, and diagonal). Second, we present an effective fusion architecture that progressively integrates multi-scale features extracted at different convolutional stages. Instead of directly combining low-level and high-level features, similar outputs of adjacent feature extraction modules are grouped and further compressed into a more concise representation of each convolutional stage for the high-accuracy SISR task. Building on these two improvements, we present a compact yet effective CNN-based model for high-quality SISR via Progressive Fusion of Orientation-Aware features (SISR-PF-OA). Extensive experimental results verify the superiority of the proposed SISR-PF-OA model, which performs favorably against state-of-the-art models in both restoration accuracy and computational efficiency (e.g., SISR-PF-OA outperforms the RCAN model on the Manga109 dataset for the ×4 SISR task, achieving a higher PSNR of 31.25 dB vs. 31.21 dB while using fewer FLOPs, 764.41 G vs. 1020.28 G). The source code will be made publicly available. |
---|---|
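The two ideas in the abstract — orientation-dependent kernels (3×1, 1×3, 3×3) gated by channel attention, and pairwise grouping/compression of adjacent module outputs — can be illustrated with a small single-channel NumPy sketch. This is a hedged toy, not the authors' implementation: the function names (`oam`, `progressive_fusion`), the random kernels, and the averaging used as a stand-in for the learned compression step are all assumptions for illustration.

```python
import numpy as np

def conv2d_same(x, k):
    """Zero-padded 2D correlation that preserves spatial size."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    h, w = x.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def oam(x, rng):
    """Toy orientation-aware module: 3x1, 1x3, and 3x3 branches,
    gated by a channel-attention-style sigmoid over pooled responses."""
    kernels = [rng.standard_normal(s) * 0.1 for s in ((3, 1), (1, 3), (3, 3))]
    feats = np.stack([conv2d_same(x, k) for k in kernels])   # (3, H, W)
    # global average pool per branch -> sigmoid gate (attention weights)
    gate = 1.0 / (1.0 + np.exp(-feats.mean(axis=(1, 2))))
    return (feats * gate[:, None, None]).sum(axis=0)         # fused (H, W)

def progressive_fusion(x, n_modules, rng):
    """Chain several OAMs, then fuse adjacent outputs pairwise
    (a stand-in for the grouping-and-compression step) until one remains."""
    outs, h = [], x
    for _ in range(n_modules):
        h = oam(h, rng)
        outs.append(h)
    while len(outs) > 1:
        pairs = [0.5 * (a + b) for a, b in zip(outs[0::2], outs[1::2])]
        if len(outs) % 2:          # odd leftover passes through unchanged
            pairs.append(outs[-1])
        outs = pairs
    return outs[0]
```

Note the design point the abstract makes: rather than concatenating the earliest and latest features directly, adjacent outputs are merged stage by stage, so each fusion step only combines features of similar abstraction level.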
ISSN: | 0031-3203; 1873-5142 |
DOI: | 10.1016/j.patcog.2022.109038 |