Universal Recurrent Event Memories for Streaming Data
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 28.07.2023 |
Summary: In this paper, we propose a new event memory architecture (MemNet) for recurrent neural networks that is universal across different types of time series data, such as scalar, multivariate, or symbolic. Unlike other external neural memory architectures, it stores key-value pairs, which separate the information used for addressing from the content itself to improve the representation, as in the digital archetype. The key-value pairs also avoid the compromise between memory depth and resolution that applies to memories constructed from the model state. One of MemNet's key characteristics is that it requires only linear adaptive mapping functions while implementing a nonlinear operation on the input data. The MemNet architecture can be applied without modification to scalar time series, logic operators on strings, and natural language processing, providing state-of-the-art results across application domains: chaotic time series prediction, symbolic operation tasks, and question-answering tasks (bAbI). Finally, because it is controlled by only five linear layers, MemNet requires far fewer training parameters than other external memory networks as well as the transformer network. The space complexity of MemNet equals that of a single self-attention layer, which greatly improves the efficiency of the attention mechanism and opens the door to IoT applications.
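
The abstract describes the mechanism only at a high level: keys address the memory, values carry the content, and every trainable map is linear. Below is a minimal sketch of such a key-value event memory, assuming a softmax (content-based) read over linearly projected keys and a circular write pointer; the projection names (W_q, W_k, W_v), the slot layout, and the dimensions are illustrative assumptions, not the paper's actual five-layer design.

```python
# A minimal sketch of a key-value event memory with purely linear
# trainable maps. All names and update rules here are assumptions
# for illustration; they are not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

d_in, d_key, d_val, n_slots = 8, 16, 16, 32

# Linear adaptive mappings: keys address the memory, values carry content.
W_k = rng.standard_normal((d_in, d_key)) * 0.1
W_v = rng.standard_normal((d_in, d_val)) * 0.1
W_q = rng.standard_normal((d_in, d_key)) * 0.1

keys = np.zeros((n_slots, d_key))    # addressing part of each slot
values = np.zeros((n_slots, d_val))  # content part of each slot
write_ptr = 0

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def write(x):
    """Store one event as a (key, value) pair in the next slot."""
    global write_ptr
    keys[write_ptr] = x @ W_k
    values[write_ptr] = x @ W_v
    write_ptr = (write_ptr + 1) % n_slots  # circular buffer over events

def read(x):
    """Content-based read: addressing uses the keys only, and the
    returned content comes from the values only."""
    q = x @ W_q
    attention = softmax(keys @ q)
    return attention @ values

# Usage: stream a few events into the memory, then query it.
for t in range(10):
    write(rng.standard_normal(d_in))
print(read(rng.standard_normal(d_in)).shape)  # (16,)
```

Although W_q, W_k, and W_v are all linear, the softmax addressing makes the overall input-to-output map nonlinear, which mirrors the property the abstract highlights: linear adaptive mappings implementing a nonlinear operation on the input data.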
DOI: 10.48550/arxiv.2307.15694