Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information

Bibliographic Details
Published in: arXiv.org
Main Authors: Mirus, Florian; Stewart, Terrence C.; Conradt, Jörg
Format: Paper; Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 30.09.2020

Summary: Vector Symbolic Architectures belong to a family of related cognitive modeling approaches that encode symbols and structures in high-dimensional vectors. Similar to human subjects, whose capacity to process and store information or concepts in short-term memory is subject to numerical restrictions, the amount of information that can be encoded in such vector representations is limited, and this limit offers one way of modeling the numerical restrictions on cognition. In this paper, we analyze these limits on the information capacity of distributed representations. We focus our analysis on simple superposition and on more complex, structured representations involving convolutive powers to encode spatial information. In two experiments, we find upper bounds for the number of concepts that can effectively be stored in a single vector.
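The abstract refers to two encoding schemes: simple superposition of item vectors, and binding items to convolutive powers of a base vector to represent spatial positions. The sketch below is not the authors' code; it is a minimal illustration of these general VSA operations, using circular convolution implemented with the FFT. The dimensionality, number of items, position range, and all names (unitary_vector, bind, power, x_axis) are illustrative assumptions.

# Minimal sketch (not from the paper): superposition and convolutive powers
# in a Vector Symbolic Architecture, with binding implemented as circular
# convolution via the FFT. Dimensionality and item counts are assumptions.
import numpy as np

def unitary_vector(d, rng):
    # Real vector whose Fourier coefficients all have unit magnitude, so
    # convolutive (including fractional and negative) powers are well-behaved.
    phases = rng.uniform(-np.pi, np.pi, d // 2 - 1)
    f = np.ones(d, dtype=complex)
    f[1:d // 2] = np.exp(1j * phases)
    f[d // 2 + 1:] = np.conj(f[1:d // 2][::-1])
    return np.real(np.fft.ifft(f))

def bind(a, b):
    # Circular convolution binding.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def power(a, p):
    # Convolutive power a**p; fractional p encodes continuous positions.
    return np.real(np.fft.ifft(np.fft.fft(a) ** p))

rng = np.random.default_rng(0)
d = 512                                   # vector dimensionality (assumption)
x_axis = unitary_vector(d, rng)           # base vector for one spatial axis
items = [rng.standard_normal(d) / np.sqrt(d) for _ in range(10)]
positions = rng.uniform(-5.0, 5.0, size=len(items))

# Superposition: sum of items, each bound to its position via a convolutive power.
memory = sum(bind(item, power(x_axis, p)) for item, p in zip(items, positions))

# Query: unbind the position of item 0 and see which item the result resembles.
query = bind(memory, power(x_axis, -positions[0]))
similarities = [float(np.dot(query, item)) for item in items]
print("best match:", int(np.argmax(similarities)))   # expected: 0

As more items are superposed into the single memory vector, the cross terms act as noise and the similarity of the correct item to the query shrinks, which is the kind of capacity limit the paper measures.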
Bibliography: SourceType: Working Papers; ObjectType: Working Paper/Pre-Print
ISSN: 2331-8422
DOI: 10.48550/arxiv.2010.00055