Separations in the Representational Capabilities of Transformers and Recurrent Architectures
Main Authors | , , , |
Format | Journal Article |
Language | English |
Published | 13.06.2024 |
Summary: | Transformer architectures have been widely adopted in foundation models. Due to their high inference costs, there is renewed interest in exploring the potential of efficient recurrent architectures (RNNs). In this paper, we analyze the differences in the representational capabilities of Transformers and RNNs across several tasks of practical relevance, including index lookup, nearest neighbor, recognizing bounded Dyck languages, and string equality. For the tasks considered, our results show separations based on the size of the model required for different architectures. For example, we show that a one-layer Transformer of logarithmic width can perform index lookup, whereas an RNN requires a hidden state of linear size. Conversely, while constant-size RNNs can recognize bounded Dyck languages, we show that one-layer Transformers require linear size for this task. Furthermore, we show that two-layer Transformers of logarithmic size can perform decision tasks such as string equality or disjointness, whereas both one-layer Transformers and recurrent models require linear size for these tasks. We also show that a log-size two-layer Transformer can implement the nearest neighbor algorithm in its forward pass; on the other hand, recurrent models require linear size. Our constructions are based on the existence of $N$ nearly orthogonal vectors in $O(\log N)$-dimensional space, and our lower bounds are based on reductions from communication complexity problems. We supplement our theoretical results with experiments that highlight the differences in the performance of these architectures on practical-size sequences. |
DOI: | 10.48550/arxiv.2406.09347 |
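The summary states that the constructions rely on the existence of $N$ nearly orthogonal vectors in $O(\log N)$-dimensional space. The sketch below is only a numerical illustration of that geometric fact and of how inner products with such codes can single out one position; it is not the paper's construction. The values N = 1024, the constant 16 in the dimension, and the toy argmax lookup are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1024                              # number of positions to tell apart (demo assumption)
d = int(np.ceil(16 * np.log(N)))      # O(log N) dimensions; the constant 16 is an arbitrary choice

# Sample N random unit vectors in R^d; with high probability they are nearly orthogonal.
V = rng.standard_normal((N, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Pairwise inner products: 1 on the diagonal, small off the diagonal.
G = V @ V.T
max_off_diag = np.abs(G - np.eye(N)).max()
print(f"d = {d}, max |<v_i, v_j>| over i != j: {max_off_diag:.3f}")

# Toy lookup: score every position against the query's code and take the argmax.
# Because off-diagonal inner products stay bounded away from 1, the argmax recovers q.
q = 417                               # hypothetical query index
scores = V @ V[q]
assert int(np.argmax(scores)) == q
```

Since every off-diagonal inner product stays well below 1, an attention-style argmax over these scores recovers the queried index, which loosely mirrors the flavor of the log-width index-lookup construction described in the summary.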
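The summary also contrasts constant-size RNNs, which can recognize bounded Dyck languages, with one-layer Transformers, which require linear size for the task. As a minimal sketch of the recurrent side of that claim, the function below recognizes Dyck-1 with a fixed depth bound using a single bounded counter as its entire state; the bound K = 4 and the restriction to one bracket type are assumptions for the illustration, not the paper's general bounded-Dyck setting.

```python
def bounded_dyck1(s: str, K: int = 4) -> bool:
    """Recognize Dyck-1 strings over '(' and ')' whose nesting depth never exceeds K."""
    depth = 0                          # the entire recurrent "hidden state": one bounded counter
    for ch in s:
        depth += 1 if ch == '(' else -1
        if depth < 0 or depth > K:     # unmatched ')' or depth bound exceeded
            return False
    return depth == 0                  # every '(' must be matched

assert bounded_dyck1("(()())")          # well nested, depth at most 2
assert not bounded_dyck1("(()")         # unmatched '('
assert not bounded_dyck1("((((()))))")  # well nested but depth 5 exceeds K = 4
```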