Learning Linear Groups in Neural Networks

Bibliographic Details
Published in: arXiv.org
Main Authors: Theodosis, Emmanouil; Helwani, Karim; Ba, Demba
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 29.05.2023
Summary: Employing equivariance in neural networks leads to greater parameter efficiency and improved generalization performance through the encoding of domain knowledge in the architecture; however, the majority of existing approaches require an a priori specification of the desired symmetries. We present a neural network architecture, Linear Group Networks (LGNs), for learning linear groups acting on the weight space of neural networks. Linear groups are desirable due to their inherent interpretability, as they can be represented as finite matrices. LGNs learn groups without any supervision or knowledge of the hidden symmetries in the data, and the learned groups can be mapped to well-known operations in machine learning. We use LGNs to learn groups on multiple datasets while considering different downstream tasks; we demonstrate that the linear group structure depends on both the data distribution and the considered task.
ISSN: 2331-8422
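The summary describes linear groups as finite matrices acting on the weight space of a network. As a minimal sketch of that idea (not the paper's actual LGN architecture, which learns the group from data without supervision), the toy example below fixes a group generator `G` to a cyclic-shift permutation matrix and builds layer responses by correlating an input against the orbit of a single shared weight vector under powers of `G`. All function names here are illustrative assumptions, not the authors' API.

```python
import numpy as np

def cyclic_shift_matrix(d):
    """Permutation matrix implementing a one-step cyclic shift in R^d.

    In an LGN-style model this generator would be a learned parameter;
    here it is hard-coded purely for illustration.
    """
    return np.roll(np.eye(d), 1, axis=0)

def orbit_features(x, w, G, order):
    """Responses of input x against the orbit {w, Gw, G^2 w, ...} of a
    single shared weight vector w under powers of the group element G."""
    feats = []
    wk = w.copy()
    for _ in range(order):
        feats.append(x @ wk)  # correlate input with the transformed weights
        wk = G @ wk           # advance to the next group element's copy
    return np.array(feats)

d = 4
G = cyclic_shift_matrix(d)           # generates a finite cyclic group: G^d = I
w = np.array([1.0, 0.0, 0.0, 0.0])   # one shared weight vector
x = np.array([0.0, 0.0, 1.0, 0.0])   # toy input
f = orbit_features(x, w, G, order=d)
```

Because `G` is orthogonal, shifting the input by `G` cyclically permutes the feature vector `f`, which is the equivariance property the summary attributes to such architectures; with a learned `G`, the matrix itself stays directly interpretable as a finite linear operator.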