S\(^3\)Attention: Improving Long Sequence Attention with Smoothed Skeleton Sketching

Bibliographic Details
Published in: arXiv.org
Main Authors: Wang, Xue; Zhou, Tian; Zhu, Jianqing; Liu, Jialin; Yuan, Kun; Yao, Tao; Yin, Wotao; Jin, Rong; Cai, HanQin
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 17.09.2024

Summary: Attention-based models have achieved remarkable breakthroughs in numerous applications. However, the quadratic complexity of Attention makes vanilla Attention-based models hard to apply to long-sequence tasks. Various improved Attention structures have been proposed to reduce the computational cost by inducing low-rankness and approximating the whole sequence with sub-sequences. The most challenging part of these approaches is maintaining a proper balance between information preservation and computation reduction: the longer the sub-sequences used, the better the information is preserved, but at the price of introducing more noise and higher computational cost. In this paper, we propose a smoothed skeleton sketching based Attention structure, coined S\(^3\)Attention, which significantly improves upon previous attempts to negotiate this trade-off. S\(^3\)Attention has two mechanisms that effectively minimize the impact of noise while keeping complexity linear in the sequence length: a smoothing block that mixes information over long sequences and a matrix sketching method that simultaneously selects columns and rows from the input matrix. We verify the effectiveness of S\(^3\)Attention both theoretically and empirically. Extensive studies on the Long Range Arena (LRA) datasets and six time-series forecasting datasets show that S\(^3\)Attention significantly outperforms both vanilla Attention and other state-of-the-art Attention variants.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2408.08567
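
The summary above names two mechanisms: a smoothing block that mixes information along the sequence and a skeleton-sketching step that keeps only selected columns and rows of the input, yielding complexity linear in the sequence length. The code below is a minimal illustrative sketch of that idea, not the authors' implementation: smoothing is approximated by a moving average, the skeleton is drawn by uniform sampling, and only the key/value side is sketched; the tensor shapes, window size, and number of samples are all assumptions.

import torch
import torch.nn.functional as F


def smooth(x: torch.Tensor, window: int = 5) -> torch.Tensor:
    # Moving-average smoothing along the sequence axis; x has shape (batch, n, d).
    x = x.transpose(1, 2)                                   # (batch, d, n) for pooling
    x = F.avg_pool1d(x, kernel_size=window, stride=1,
                     padding=window // 2, count_include_pad=False)
    return x.transpose(1, 2)                                # back to (batch, n, d)


def s3_attention_sketch(q, k, v, num_samples: int = 64):
    # q, k, v: (batch, n, d). Attend from every query to a sketched sub-sequence.
    n, d = k.shape[1], k.shape[2]
    k, v = smooth(k), smooth(v)                             # mix information over the sequence
    idx = torch.randperm(n)[:num_samples]                   # skeleton indices (uniform sampling is an assumption)
    k_s, v_s = k[:, idx, :], v[:, idx, :]                   # sketched keys / values: (batch, m, d)
    scores = q @ k_s.transpose(1, 2) / d ** 0.5             # (batch, n, m) instead of (batch, n, n)
    return torch.softmax(scores, dim=-1) @ v_s              # (batch, n, d)

With n tokens and m sampled skeleton points, the score matrix is n-by-m rather than n-by-n, which is what keeps the cost linear in n for a fixed sketch size m.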