Understanding neural networks with reproducing kernel Banach spaces

Bibliographic Details
Published in: Applied and Computational Harmonic Analysis, Vol. 62, pp. 194-236
Main Authors: Bartolucci, Francesca; De Vito, Ernesto; Rosasco, Lorenzo; Vigogna, Stefano
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.01.2023
Summary: Characterizing the function spaces corresponding to neural networks can provide a way to understand their properties. In this paper we discuss how the theory of reproducing kernel Banach spaces can be used to tackle this challenge. In particular, we prove a representer theorem for a wide class of reproducing kernel Banach spaces that admit a suitable integral representation and include one-hidden-layer neural networks of possibly infinite width. Further, we show that, for a suitable class of ReLU activation functions, the norm in the corresponding reproducing kernel Banach space can be characterized in terms of the inverse Radon transform of a bounded real measure, with norm given by the total variation norm of the measure. Our analysis simplifies and extends recent results in [45,36,37].
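
For orientation, here is a minimal sketch of the setting the summary describes; the notation is an assumption, not taken from the record. A one-hidden-layer network of possibly infinite width can be written as an integral against a bounded real measure, the RKBS norm is the smallest total variation norm among representing measures, and a representer theorem of this kind then guarantees a finite-width minimizer for regularized empirical risk over n samples:

\[
f_\mu(x) = \int \sigma(\langle w, x \rangle - b)\, d\mu(w, b),
\qquad
\|f\| = \inf \{\, |\mu|_{\mathrm{TV}} : f = f_\mu \,\},
\]
\[
f^\star(x) = \sum_{j=1}^{N} c_j\, \sigma(\langle w_j, x \rangle - b_j), \qquad N \le n.
\]

The short Python sketch below evaluates the finite-width form and its l1 coefficient norm, the discrete analogue of the measure's total variation norm; all names, the unit-sphere normalization of the inner weights, and the random data are illustrative assumptions, not the paper's method.

import numpy as np

def finite_width_relu(X, W, b, c):
    # f(x) = sum_j c_j * relu(<w_j, x> - b_j): the finite-width
    # form a representer theorem guarantees for a minimizer.
    pre = X @ W.T - b                  # (n_samples, n_neurons)
    return np.maximum(pre, 0.0) @ c

def tv_surrogate(c):
    # l1 norm of the outer coefficients: the discrete analogue
    # of the total variation norm of the representing measure.
    return float(np.abs(c).sum())

rng = np.random.default_rng(0)
d, n_neurons, n_samples = 3, 5, 10
W = rng.standard_normal((n_neurons, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit directions
b = rng.standard_normal(n_neurons)
c = rng.standard_normal(n_neurons)
X = rng.standard_normal((n_samples, d))

print(finite_width_relu(X, W, b, c).shape)      # (10,)
print(tv_surrogate(c))                          # scalar penalty
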
ISSN: 1063-5203
EISSN: 1096-603X
DOI: 10.1016/j.acha.2022.08.006