Lattice Representation Learning
Format: Journal Article
Language: English
Published: 24.06.2020
Summary: In this article we introduce theory and algorithms for learning discrete representations that take values on a lattice embedded in a Euclidean space. Lattice representations possess an interesting combination of properties: a) they can be computed explicitly using lattice quantization, yet they can be learned efficiently using the ideas we introduce in this paper; b) they are closely related to Gaussian Variational Autoencoders, allowing designers familiar with the latter to easily produce discrete representations from their models; and c) since lattices satisfy the axioms of a group, their adoption can lead to a way of learning simple algebras for modeling binary operations between objects through symbolic formalisms, while still learning these structures using differentiation techniques. This article focuses on laying the groundwork for exploring and exploiting the first two properties, including a new mathematical result linking expressions used at training and inference time, and experimental validation on two popular datasets.
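The paper's own learning algorithms are not reproduced in this record. As a minimal sketch of what the "lattice quantization" mentioned in the summary involves, the following implements the standard closest-point routine for the D_n lattice (integer vectors with even coordinate sum), a classical algorithm from Conway and Sloane; the function names here are illustrative, not taken from the paper.

```python
import numpy as np

def closest_point_Zn(x):
    # Nearest point of the integer lattice Z^n: round each coordinate.
    return np.round(x)

def closest_point_Dn(x):
    """Nearest point of the D_n lattice (integer vectors whose
    coordinates sum to an even number).

    Classical algorithm: round every coordinate; if the resulting
    sum is odd, re-round the coordinate with the largest rounding
    error in the opposite direction.
    """
    f = np.round(x)
    if int(f.sum()) % 2 == 0:
        return f
    # Coordinate whose rounding introduced the largest error.
    i = int(np.argmax(np.abs(x - f)))
    # Round that coordinate the other way to restore even parity.
    f[i] += 1.0 if x[i] > f[i] else -1.0
    return f

x = np.array([0.6, 0.1, 0.1])
# Plain rounding gives [1, 0, 0] (odd sum, not in D_n);
# the parity fix moves the point to [0, 0, 0].
print(closest_point_Dn(x))
```

Quantizing to D_n rather than Z^n halves the lattice density while keeping the closest-point search O(n), which is why such structured lattices are common choices when an explicit quantizer is needed.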
DOI: 10.48550/arxiv.2006.13833