Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations


Bibliographic Details
Published in: Neural Networks, Vol. 161, pp. 242-253
Main Authors: Belomestny, Denis; Naumov, Alexey; Puchkin, Nikita; Samsonov, Sergey
Format: Journal Article
Language: English
Published: United States, Elsevier Ltd, 01.04.2023

Summary: This paper investigates the approximation properties of deep neural networks with piecewise-polynomial activation functions. We derive the required depth, width, and sparsity of a deep neural network to approximate any Hölder smooth function up to a given approximation error in Hölder norms in such a way that all weights of this neural network are bounded by 1. The latter feature is essential to control generalization errors in many statistical and machine learning applications.
• Rates and complexity for smooth function approximation in Hölder norms by ReQU neural networks.
• Explicit and uniform bounds for weights of the approximating neural network.
• Exponential convergence rates for analytic functions.
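The ReQU (rectified quadratic unit) activation named in the highlights is the piecewise-polynomial function x ↦ max(x, 0)². Below is a minimal illustrative sketch, not taken from the paper: it defines ReQU and evaluates a small fully connected network whose weights are all drawn from [-1, 1], mirroring the boundedness condition the abstract describes. The layer widths and random initialization are arbitrary assumptions for demonstration.

```python
import numpy as np

def requ(x):
    """Rectified Quadratic Unit (ReQU): equals x**2 for x >= 0, else 0.
    A piecewise-polynomial activation, smoother than ReLU (it is C^1)."""
    return np.maximum(x, 0.0) ** 2

# Toy network with all weights and biases bounded by 1 in absolute value,
# as in the paper's setting. Widths here are illustrative, not prescribed.
rng = np.random.default_rng(0)
widths = [1, 8, 8, 1]
params = [
    (rng.uniform(-1, 1, size=(m, n)), rng.uniform(-1, 1, size=n))
    for m, n in zip(widths[:-1], widths[1:])
]

def forward(x, params):
    """Evaluate the ReQU network on a batch of inputs of shape (batch, 1)."""
    h = x
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:  # no activation on the output layer
            h = requ(h)
    return h

x = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
y = forward(x, params)  # shape (5, 1)
```

The sketch only shows the network class the paper studies; the actual approximation results concern how depth, width, and sparsity must scale with the target Hölder smoothness and accuracy.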
ISSN: 0893-6080 (print); 1879-2782 (online)
DOI: 10.1016/j.neunet.2023.01.035