Weighted residuals for very deep networks

Bibliographic Details
Published in: 2016 3rd International Conference on Systems and Informatics (ICSAI), pp. 936 - 941
Main Authors: Falong Shen, Rui Gan, Gang Zeng
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2016

Summary: Deep residual networks have recently shown appealing performance on many challenging computer vision tasks. However, the original residual structure still has some defects that make it difficult to converge on very deep networks. In this paper, we introduce a weighted residual network to address the incompatibility between ReLU and element-wise addition, as well as the deep network initialization problem. The weighted residual network learns to combine residuals from different layers effectively and efficiently. The proposed models enjoy a consistent improvement in accuracy and convergence as depth increases from 100+ layers to 1000+ layers, while incurring little additional computation and GPU memory burden compared with the original residual networks. The networks are optimized by projected stochastic gradient descent. Experiments on CIFAR-10 show that our algorithm converges faster than the original residual networks and reaches a high accuracy of 95.3% with a 1192-layer model. Experiments on CIFAR-100 and ImageNet-1k also verify the effectiveness of our proposed design.
DOI: 10.1109/ICSAI.2016.7811085
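
The summary describes a residual block whose branch output is scaled by a learnable weight before the element-wise addition, with the weights trained by projected stochastic gradient descent. The following is a minimal PyTorch sketch of that idea under stated assumptions, not the authors' released implementation: the class and method names (`WeightedResidualBlock`, `project_weight`, `lam`), the zero initialization of the weight, and the [-1, 1] projection range are illustrative choices, not taken from the paper.

```python
# Illustrative sketch of a weighted residual block (assumptions noted inline).
import torch
import torch.nn as nn


class WeightedResidualBlock(nn.Module):
    """Residual block with a learnable scalar weight on the residual branch."""

    def __init__(self, channels: int):
        super().__init__()
        # ReLU is kept inside the branch so the identity path itself is not
        # rectified before the element-wise addition.
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Learnable scalar weight on the residual; starting at 0 makes the
        # block an identity mapping at initialization (assumed here).
        self.lam = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Weighted residual combined with the identity path by addition.
        return x + self.lam * self.branch(x)

    @torch.no_grad()
    def project_weight(self, bound: float = 1.0) -> None:
        # Projection step of projected SGD: after each gradient update, clamp
        # the scalar weight into [-bound, bound] (range assumed).
        self.lam.clamp_(-bound, bound)
```

In a training loop, one would call `project_weight()` on each block after `optimizer.step()` so the scalar weights stay inside the chosen feasible range, which is what makes the update a projected SGD step in this sketch.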