Error bounds for approximations with deep ReLU networks

Bibliographic Details
Published in Neural Networks, Vol. 94, pp. 103-114
Main Author Yarotsky, Dmitry
Format Journal Article
Language English
Published United States: Elsevier Ltd, 01.10.2017
Summary: We study the expressive power of shallow and deep neural networks with piecewise linear activation functions. We establish new rigorous upper and lower bounds for the network complexity in the setting of approximations in Sobolev spaces. In particular, we prove that deep ReLU networks approximate smooth functions more efficiently than shallow networks. In the case of approximations of 1D Lipschitz functions, we describe adaptive depth-6 network architectures that are more efficient than the standard shallow architecture.
ISSN:0893-6080
1879-2782
DOI:10.1016/j.neunet.2017.07.002
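As a rough illustration of the depth-efficiency claim in the summary above (this is not the paper's code, and the helper names tooth and approx_square are introduced here purely for illustration): the NumPy sketch below approximates x^2 on [0, 1] by composing a ReLU-expressible "tooth" (hat) function m times and combining the compositions linearly. The uniform error decays like 2^(-2m-2), while the construction's depth grows only linearly in m.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def tooth(x):
        # "Tooth" (hat) function g: [0, 1] -> [0, 1], realizable with three ReLU units:
        # g(x) = 2*relu(x) - 4*relu(x - 1/2) + 2*relu(x - 1)
        return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

    def approx_square(x, m):
        # Hypothetical helper: f_m(x) = x - sum_{s=1}^{m} g_s(x) / 4^s, where g_s is the
        # s-fold composition of the tooth function. f_m is the piecewise-linear
        # interpolant of x^2 on a uniform grid of 2^m + 1 points, so the error on
        # [0, 1] is at most 2^(-2m-2), while depth grows only linearly in m.
        out = np.array(x, dtype=float).copy()
        g = np.array(x, dtype=float).copy()
        for s in range(1, m + 1):
            g = tooth(g)              # one more composed layer
            out -= g / 4.0 ** s
        return out

    x = np.linspace(0.0, 1.0, 10001)
    for m in (2, 4, 6):
        err = np.max(np.abs(approx_square(x, m) - x ** 2))
        print(f"m={m}: max error {err:.2e} (bound {2.0 ** (-2 * m - 2):.2e})")

The point of the sketch is only the contrast it exposes: each extra composition (roughly one extra layer) shrinks the error by a factor of 4, whereas a fixed-depth piecewise-linear approximation of x^2 needs a number of linear pieces on the order of 1/sqrt(error) to reach the same accuracy.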