A two-channel convolutional neural network for image super-resolution
Published in | Neurocomputing (Amsterdam), Vol. 275, pp. 267–277
---|---
Main Authors | , , , ,
Format | Journal Article
Language | English
Published | Elsevier B.V., 31.01.2018
Summary: | A two-channel convolutional neural network (one shallow channel and one deep channel) is proposed for single image super-resolution (SISR). Most existing CNN-based super-resolution methods use a single shallow network that easily loses detailed information, and they require preprocessing such as bicubic interpolation to enlarge low-resolution (LR) images to the size of the high-resolution (HR) images, which may introduce new noise. Moreover, most of them use only one fixed filter size during reconstruction. The proposed algorithm, named shallow and deep convolutional networks for image super-resolution (SDSR), addresses these problems. First, the method uses two channels: the shallow channel mainly restores the general outline of the image, while the deep channel extracts detailed texture information. Second, the method directly learns an end-to-end mapping between LR and HR images without hand-designed preprocessing; upsampling by deconvolution is embedded in both channels, which makes training more efficient and effective and reduces the computational complexity of the overall SR operation. Finally, in the last stage of reconstruction, the deep channel adopts a multi-scale scheme that extracts both short- and long-scale texture information simultaneously. The model is evaluated on public image and video datasets, and experimental results demonstrate that the proposed method outperforms existing methods in accuracy and visual quality. |
---|---|
ISSN: | 0925-2312; 1872-8286 |
DOI: | 10.1016/j.neucom.2017.08.041 |
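
To make the architecture described in the abstract concrete, the following is a minimal PyTorch sketch of the two-channel idea. It is a reading of the abstract only: the class name `SDSRSketch` and all layer widths, depths, kernel sizes, and the fusion-by-addition step are illustrative assumptions, not the configuration published in the paper.

```python
import torch
import torch.nn as nn

class SDSRSketch(nn.Module):
    """Illustrative two-channel super-resolution network (2x upscaling).

    The shallow channel restores the coarse outline; the deep channel
    recovers fine texture, with a multi-scale reconstruction step at the
    end. All layer widths, depths, and kernel sizes are placeholder
    choices, not the configuration reported in the paper.
    """

    def __init__(self, in_channels: int = 1):
        super().__init__()
        # Shallow channel: one conv layer, then learned deconvolution
        # upsampling (kernel 4, stride 2, padding 1 gives an exact 2x size).
        self.shallow = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, in_channels, kernel_size=4, stride=2, padding=1),
        )
        # Deep channel: deeper feature extraction on the LR input, then the
        # same embedded deconvolution upsampling (no bicubic preprocessing).
        self.deep = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Multi-scale reconstruction: parallel 3x3 and 5x5 kernels capture
        # short- and long-scale texture simultaneously.
        self.reconstruct_3 = nn.Conv2d(64, in_channels, kernel_size=3, padding=1)
        self.reconstruct_5 = nn.Conv2d(64, in_channels, kernel_size=5, padding=2)

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        outline = self.shallow(lr)      # general image outline
        features = self.deep(lr)        # detailed texture features
        texture = self.reconstruct_3(features) + self.reconstruct_5(features)
        return outline + texture        # fuse the two channels


# Example: a 24x24 LR patch maps end-to-end to a 48x48 HR estimate.
if __name__ == "__main__":
    lr_patch = torch.randn(1, 1, 24, 24)
    print(SDSRSketch()(lr_patch).shape)  # torch.Size([1, 1, 48, 48])
```

The sketch reflects the abstract's main design points: deconvolution upsampling lives inside each channel, so every convolution before it runs at the cheaper LR resolution, and the final reconstruction applies two filter sizes in parallel rather than a single fixed filter.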