Any-dimensional equivariant neural networks

Bibliographic Details
Published in: arXiv.org
Main Authors: Levin, Eitan; Díaz, Mateo
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 29.04.2024

Summary: Traditional supervised learning aims to learn an unknown mapping by fitting a function to a set of input-output pairs with a fixed dimension. The fitted function is then defined on inputs of the same dimension. However, in many settings, the unknown mapping takes inputs in any dimension; examples include graph parameters defined on graphs of any size and physics quantities defined on an arbitrary number of particles. We leverage a newly-discovered phenomenon in algebraic topology, called representation stability, to define equivariant neural networks that can be trained with data in a fixed dimension and then extended to accept inputs in any dimension. Our approach is user-friendly, requiring only the network architecture and the groups for equivariance, and can be combined with any training procedure. We provide a simple open-source implementation of our methods and offer preliminary numerical experiments.
ISSN: 2331-8422
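
The dimension-free extension described in the summary can be illustrated with a toy example; the sketch below (not the authors' implementation, and with hypothetical parameter values) uses a DeepSets-style permutation-equivariant linear layer whose two scalar parameters do not depend on the input dimension, so a layer fitted on fixed-dimension data applies unchanged to inputs of any dimension.

import numpy as np

def equivariant_linear(x, a, b):
    # Permutation-equivariant map R^n -> R^n: y = a*x + b*mean(x)*ones(n).
    # The scalar parameters (a, b) are independent of n, so the same
    # fitted layer evaluates on inputs of any dimension.
    return a * x + b * x.mean() * np.ones_like(x)

# Hypothetical parameters, as if fitted on 5-dimensional training data.
a, b = 2.0, -1.0
print(equivariant_linear(np.arange(5.0), a, b))  # n = 5
print(equivariant_linear(np.arange(8.0), a, b))  # same layer, n = 8

# Equivariance check: permuting the input permutes the output.
x = np.arange(8.0)
perm = np.random.permutation(8)
assert np.allclose(equivariant_linear(x[perm], a, b),
                   equivariant_linear(x, a, b)[perm])

The paper's contribution generalizes this picture: representation stability guarantees that, for suitable groups and architectures, parameters learned at one fixed dimension consistently determine the network at every larger dimension.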