Single image super-resolution based on directional variance attention network
Published in | Pattern Recognition, Vol. 133, p. 108997 |
---|---|
Main Authors | , , , , , |
Format | Journal Article |
Language | English |
Published | Elsevier Ltd, 01.01.2023 |
Summary: | •We propose a lightweight and efficient directional variance attention network (DiVANet) for high-quality image SR. Extensive experiments on a variety of public datasets demonstrate the superiority of the proposed architecture over state-of-the-art models. •We propose a directional variance attention mechanism (DiVA) to enhance features in different channels and spatial regions. Such a mechanism allows the network to focus on more informative features. •We introduce a novel procedure for using attention mechanisms together with residual blocks, following two independent but parallel computational paths in order to facilitate the preservation of finer details. |
Recent advances in single image super-resolution (SISR) explore the power of deep convolutional neural networks (CNNs) to achieve better performance. However, most of the progress has been made by scaling CNN architectures, which usually raises computational demands and memory consumption. This makes modern architectures less applicable in practice. In addition, most CNN-based SR methods do not fully utilize the informative hierarchical features that are helpful for final image recovery. In order to address these issues, we propose a directional variance attention network (DiVANet), a computationally efficient yet accurate network for SISR. Specifically, we introduce a novel directional variance attention (DiVA) mechanism to capture long-range spatial dependencies and exploit inter-channel dependencies simultaneously for more discriminative representations. Furthermore, we propose a residual attention feature group (RAFG) for parallelizing attention and residual block computation. The output of each residual block is linearly fused at the RAFG output to provide access to the whole feature hierarchy. In parallel, DiVA extracts the most relevant features from the network to improve the final output and prevent information loss along the successive operations inside the network. Experimental results demonstrate the superiority of DiVANet over the state of the art on several datasets, while maintaining a relatively low computation and memory footprint. The code is available at https://github.com/pbehjatii/DiVANet. |
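The abstract describes an attention mechanism built from per-direction feature variance. The following is a minimal NumPy sketch of that general idea only, not the authors' implementation (their network uses convolutional layers; see the linked repository for the actual code). The function name, the choice of summing the two directional variances, and the sigmoid rescaling are all assumptions made for illustration.

```python
import numpy as np

def directional_variance_attention(x):
    """Hypothetical sketch: reweight a (C, H, W) feature map by
    per-channel variance computed along two spatial directions."""
    # Variance of each row, per channel -> shape (C, H):
    # captures how much the features vary horizontally.
    var_h = x.var(axis=2)
    # Variance of each column, per channel -> shape (C, W):
    # captures how much the features vary vertically.
    var_w = x.var(axis=1)
    # Broadcast both back to (C, H, W) and combine the directions
    # (summation is an assumption; other fusions are possible).
    att = var_h[:, :, None] + var_w[:, None, :]
    # Squash to (0, 1) attention weights with a sigmoid.
    att = 1.0 / (1.0 + np.exp(-att))
    # High-variance (more informative) regions are attenuated less.
    return x * att

feat = np.random.rand(4, 8, 8).astype(np.float32)
out = directional_variance_attention(feat)
print(out.shape)  # (4, 8, 8)
```

Because the attention weights lie in (0, 1), the output preserves the input's shape while emphasizing spatial regions and channels with higher directional variance, which is the intuition the abstract attributes to DiVA.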
---|---|
ISSN: | 0031-3203 1873-5142 |
DOI: | 10.1016/j.patcog.2022.108997 |