Global-Local Awareness Network for Image Super-Resolution

Bibliographic Details
Published in: 2023 IEEE International Conference on Image Processing (ICIP), pp. 1150-1154
Main Authors: Pan, Pin-Chi; Hsu, Tzu-Hao; Wei, Wen-Li; Lin, Jen-Chun
Format: Conference Proceeding
Language: English
Published: IEEE, 08.10.2023

Summary: Deep-net models based on self-attention, such as the Swin Transformer, have achieved great success in single image super-resolution (SISR). While self-attention excels at modeling global information, it is less effective at capturing high frequencies (e.g., edges), which primarily carry the local information that is crucial for SISR. To tackle this, we propose a global-local awareness network (GLA-Net) that captures both global and local information to learn comprehensive features covering low- and high-frequency content. First, we design a GLA layer that combines a high-frequency-oriented Inception module with a low-frequency-oriented Swin Transformer module to process local and global information simultaneously. Second, we introduce dense connections between GLA blocks, each composed of several GLA layers, to strengthen feature propagation and alleviate the vanishing-gradient problem. By coupling these core designs, GLA-Net achieves state-of-the-art performance on SISR.
DOI: 10.1109/ICIP49359.2023.10221952
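
The summary names two concrete designs: a GLA layer pairing an Inception-style local branch with a Swin-Transformer-style global branch, and dense connections between GLA blocks. The PyTorch sketch below is a rough illustration of that structure, not the authors' implementation: the kernel sizes, channel splits, 1x1 fusion convolutions, module names (InceptionBranch, GlobalBranch, GLALayer, GLABody), and the plain (unwindowed) multi-head attention standing in for a true Swin module are all assumptions.

```python
import torch
import torch.nn as nn


class InceptionBranch(nn.Module):
    # High-frequency-oriented local branch: parallel convolutions with
    # different receptive fields, in the spirit of an Inception module.
    # Kernel sizes and the channel split are assumptions.
    def __init__(self, dim):
        super().__init__()
        assert dim % 4 == 0, "dim must be divisible by 4 for this split"
        self.conv1 = nn.Conv2d(dim, dim // 2, kernel_size=1)
        self.conv3 = nn.Conv2d(dim, dim // 4, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(dim, dim // 4, kernel_size=5, padding=2)

    def forward(self, x):
        return torch.cat([self.conv1(x), self.conv3(x), self.conv5(x)], dim=1)


class GlobalBranch(nn.Module):
    # Low-frequency-oriented global branch: plain multi-head self-attention
    # over all spatial positions, a simplification of the windowed/shifted
    # attention used by a real Swin Transformer module.
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                 # x: (B, C, H, W)
        b, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        n = self.norm(t)
        out, _ = self.attn(n, n, n)       # self-attention (q = k = v)
        return out.transpose(1, 2).reshape(b, c, h, w)


class GLALayer(nn.Module):
    # One GLA layer: run both branches on the same input and fuse them
    # with a 1x1 convolution plus a residual; the fusion is an assumption.
    def __init__(self, dim):
        super().__init__()
        self.local_branch = InceptionBranch(dim)
        self.global_branch = GlobalBranch(dim)
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)

    def forward(self, x):
        y = torch.cat([self.local_branch(x), self.global_branch(x)], dim=1)
        return x + self.fuse(y)


class GLABody(nn.Module):
    # Stack of GLA blocks (each a few GLA layers) with dense connections:
    # every block consumes a 1x1-conv fusion of all earlier block outputs.
    def __init__(self, dim, n_blocks=3, layers_per_block=2):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(*[GLALayer(dim) for _ in range(layers_per_block)])
            for _ in range(n_blocks))
        self.fuse = nn.ModuleList(
            nn.Conv2d(dim * (i + 1), dim, kernel_size=1)
            for i in range(n_blocks))

    def forward(self, x):
        feats = [x]
        for block, fuse in zip(self.blocks, self.fuse):
            feats.append(block(fuse(torch.cat(feats, dim=1))))
        return feats[-1]


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)   # (batch, channels, height, width)
    print(GLABody(dim=64)(x).shape)  # torch.Size([1, 64, 32, 32])
```

The dense wiring here mirrors DenseNet-style feature reuse: each block receives the concatenation of all earlier outputs, projected back to dim channels by a 1x1 convolution. That is one plausible reading of the abstract's "dense connections in-between GLA blocks" intended to strengthen feature propagation and ease gradient flow.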