LSG Attention: Extrapolation of pretrained Transformers to long sequences
Format: Journal Article
Language: English
Published: 13.10.2022
Summary: Transformer models achieve state-of-the-art performance on a wide range of NLP tasks. However, they suffer from a prohibitive limitation due to the self-attention mechanism, which induces $O(n^2)$ complexity with respect to sequence length. To address this limitation, we introduce the LSG architecture, which relies on Local, Sparse and Global attention. We show that LSG attention is fast, efficient, and competitive in classification and summarization tasks on long documents. Interestingly, it can also be used to adapt existing pretrained models to extrapolate efficiently to longer sequences with no additional training. Along with the introduction of the LSG attention mechanism, we propose tools to train new models and to adapt existing ones based on this mechanism.
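
The abstract only names the three patterns that LSG attention combines; as a rough visualization of how Local, Sparse and Global attention restrict the $O(n^2)$ pairwise interactions, here is a minimal NumPy sketch of a combined attention mask. It is not the authors' implementation (which works block-wise rather than materializing a dense mask); the parameters `window`, `stride`, `n_sparse` and `n_global` are illustrative assumptions.

```python
import numpy as np

def lsg_attention_mask(n, window=2, stride=4, n_sparse=2, n_global=1):
    """Boolean attention mask combining the three LSG-style patterns.

    mask[i, j] is True where query i may attend to key j:
      - Local:  |i - j| <= window (a sliding neighborhood),
      - Sparse: a fixed number of stride-spaced positions per side,
      - Global: the first n_global tokens attend everywhere and are
        attended to by every position.
    All parameter names and default values are illustrative assumptions.
    """
    idx = np.arange(n)
    diff = np.abs(idx[None, :] - idx[:, None])           # |i - j| for all pairs
    local = diff <= window
    sparse = (diff % stride == 0) & (diff <= n_sparse * stride)
    glob = (idx[:, None] < n_global) | (idx[None, :] < n_global)
    return local | sparse | glob

mask = lsg_attention_mask(16)
# Each query attends to O(window + n_sparse + n_global) positions, so masked
# attention scales linearly in n rather than O(n^2); this toy version still
# materializes an n x n mask purely to visualize the pattern.
print(mask.sum(axis=1))  # per-query attended positions, much smaller than n
```

Because each query keeps a bounded number of attended positions regardless of sequence length, the per-layer cost of attention under such a mask grows linearly in $n$, which is what allows a pretrained model adapted to this pattern to handle sequences longer than those seen in training.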
DOI: 10.48550/arxiv.2210.15497