Neural Networks in Fréchet spaces
Main Authors | , , |
Format | Journal Article |
Language | English |
Published | 28.09.2021 |
Subjects | |
Summary: | We define a neural network on infinite-dimensional spaces for which we can
show the universal approximation property. Specifically, we derive approximation
results for continuous functions from a Fréchet space $\mathcal{X}$ into a Banach
space $\mathcal{Y}$. These results generalise the well-known universal
approximation theorem for continuous functions from $\mathbb{R}^n$ to
$\mathbb{R}$, where approximation is done with (multilayer) neural networks
[15, 25, 18, 29]. Our infinite-dimensional networks are constructed using
activation functions that are nonlinear operators, together with affine
transforms; several examples of such activation functions are given. We show
furthermore that our neural networks on infinite-dimensional spaces can be
projected down to finite-dimensional subspaces with any desired accuracy, thus
obtaining approximating networks that are easy to implement and allow for fast
computation and fitting. The resulting neural network architecture is therefore
applicable to prediction tasks based on functional data. |
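The projection idea in the summary can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's construction: the input function is projected onto a finite-dimensional subspace (here the span of a few orthonormal sine modes on $[0,1]$, an assumed basis choice), and an ordinary affine transform plus nonlinear activation is then applied to the resulting coordinates.

```python
import numpy as np

# Hypothetical illustration, NOT the authors' exact construction: evaluate a
# feed-forward layer on functional input by (1) projecting the input function
# onto a finite-dimensional subspace and (2) applying an affine transform
# followed by a nonlinear activation to the projected coordinates.

def project(f, basis, grid):
    """Approximate the coefficients <f, e_k> by a Riemann sum on `grid`."""
    dt = grid[1] - grid[0]
    fx = f(grid)
    return np.array([np.sum(fx * e(grid)) * dt for e in basis])

def layer(coeffs, W, b, sigma=np.tanh):
    """Affine transform plus nonlinear activation on the finite coordinates."""
    return sigma(W @ coeffs + b)

# Usage: project f(t) = sin(pi t) onto four orthonormal sine modes on [0, 1].
grid = np.linspace(0.0, 1.0, 1001)
basis = [lambda t, k=k: np.sqrt(2.0) * np.sin((k + 1) * np.pi * t)
         for k in range(4)]
c = project(lambda t: np.sin(np.pi * t), basis, grid)
# c[0] is close to sqrt(2)/2; the higher coefficients vanish by orthogonality.

rng = np.random.default_rng(0)
out = layer(c, rng.standard_normal((3, 4)), np.zeros(3))
```

Refining the grid and enlarging the basis shrinks the projection error, mirroring the paper's claim that the infinite-dimensional network can be approximated on finite-dimensional subspaces to any desired accuracy.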
DOI: | 10.48550/arxiv.2109.13512 |