Affine symmetries and neural network identifiability

Bibliographic Details
Published in: Advances in Mathematics, Vol. 376, p. 107485
Main Authors: Vlačić, Verner; Bölcskei, Helmut
Format: Journal Article
Language: English
Published: Elsevier Inc., 06.01.2021
Summary: We address the following question of neural network identifiability: Suppose we are given a function f:R^m→R^n and a nonlinearity ρ. Can we specify the architecture, weights, and biases of all feed-forward neural networks with respect to ρ giving rise to f? Existing literature on the subject suggests that the answer should be yes, provided we are only concerned with finding networks that satisfy certain “genericity conditions”. Moreover, the identified networks are mutually related by symmetries of the nonlinearity. For instance, the tanh function is odd, and so flipping the signs of the incoming and outgoing weights of a neuron does not change the output map of the network. The results known hitherto, however, apply either to single-layer networks, or to networks satisfying specific structural assumptions (such as full connectivity), as well as to specific nonlinearities. In an effort to answer the identifiability question in greater generality, we consider arbitrary nonlinearities with potentially complicated affine symmetries, and we show that the symmetries can be used to find a rich set of networks giving rise to the same function f. The set obtained in this manner is, in fact, exhaustive (i.e., it contains all networks giving rise to f) unless there exists a network A “with no internal symmetries” giving rise to the identically zero function. This result can thus be interpreted as an analog of the rank-nullity theorem for linear operators. We furthermore exhibit a class of “tanh-type” nonlinearities (including the tanh function itself) for which such a network A does not exist, thereby solving the identifiability question for these nonlinearities in full generality and settling an open problem posed by Fefferman in [6]. Finally, we show that this class contains nonlinearities with arbitrarily complicated symmetries.
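
To make the sign-flip symmetry mentioned in the abstract concrete, the following minimal NumPy sketch (not taken from the paper; the single-hidden-layer architecture, dimensions, and variable names are illustrative assumptions) checks numerically that negating a hidden neuron's incoming weights and bias together with its outgoing weights leaves the network's output map unchanged, precisely because tanh is odd.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy one-hidden-layer tanh network: f(x) = W2 @ tanh(W1 @ x + b1) + b2.
m, h, n = 3, 5, 2                              # input dim, hidden neurons, output dim
W1, b1 = rng.normal(size=(h, m)), rng.normal(size=h)
W2, b2 = rng.normal(size=(n, h)), rng.normal(size=n)

def f(x, W1, b1, W2, b2):
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Flip the signs of neuron k's incoming weights and bias, and of its
# outgoing weights. Since tanh(-t) = -tanh(t), the two sign changes
# cancel, so the modified network realizes the same function.
k = 2
W1f, b1f, W2f = W1.copy(), b1.copy(), W2.copy()
W1f[k, :] *= -1
b1f[k]    *= -1
W2f[:, k] *= -1

x = rng.normal(size=m)
assert np.allclose(f(x, W1, b1, W2, b2), f(x, W1f, b1f, W2f, b2))
```

This is exactly the kind of symmetry-induced non-uniqueness the paper characterizes: the two weight assignments differ, yet give rise to the same map f.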
ISSN: 0001-8708, 1090-2082
DOI: 10.1016/j.aim.2020.107485