Efficient Transformers: A Survey

Bibliographic Details
Published in: ACM Computing Surveys, Vol. 55, No. 6, pp. 1-28
Main Authors: Tay, Yi; Dehghani, Mostafa; Bahri, Dara; Metzler, Donald
Format: Journal Article
Language: English
Published: New York, NY: Association for Computing Machinery (ACM), 30.06.2023

More Information
Summary: Transformer model architectures have garnered immense interest lately due to their effectiveness across a range of domains, such as language, vision, and reinforcement learning. In the field of natural language processing, for example, Transformers have become an indispensable staple of the modern deep learning stack. Recently, a dizzying number of "X-former" models have been proposed (Reformer, Linformer, Performer, and Longformer, to name a few) that improve upon the original Transformer architecture, many of them targeting computational and memory efficiency. With the aim of helping the avid researcher navigate this flurry, this article characterizes a large and thoughtful selection of recent efficiency-focused "X-former" models, providing an organized and comprehensive overview of existing work across multiple domains.
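The efficiency theme in the summary is easiest to see in the attention operation itself: standard self-attention forms an n x n score matrix, so its time and memory cost grows quadratically with sequence length n, and many "X-former" variants attack exactly this bottleneck. The sketch below is an illustration only, not code from the surveyed paper: it contrasts full scaled dot-product attention with a Linformer-style variant that projects keys and values down to a fixed length r << n. The helper names, the projection matrices e_proj and f_proj, and the toy dimensions are all assumptions made for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(q, k, v):
    # Standard scaled dot-product attention. The (n, n) score matrix
    # is the quadratic time/memory bottleneck the survey discusses.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)        # shape (n, n)
    return softmax(scores) @ v           # shape (n, d)

def low_rank_attention(q, k, v, e_proj, f_proj):
    # Linformer-style variant (illustrative): project keys and values
    # from length n down to a fixed length r, so scores are only (n, r).
    d = q.shape[-1]
    k_small = e_proj @ k                 # shape (r, d)
    v_small = f_proj @ v                 # shape (r, d)
    scores = q @ k_small.T / np.sqrt(d)  # shape (n, r)
    return softmax(scores) @ v_small     # shape (n, d)

rng = np.random.default_rng(0)
n, d, r = 1024, 64, 128                  # toy sizes: sequence length, head dim, projected length
q = rng.standard_normal((n, d))
k = rng.standard_normal((n, d))
v = rng.standard_normal((n, d))
e_proj = rng.standard_normal((r, n)) / np.sqrt(n)
f_proj = rng.standard_normal((r, n)) / np.sqrt(n)

print(full_attention(q, k, v).shape)                      # (1024, 64), via a 1024 x 1024 score matrix
print(low_rank_attention(q, k, v, e_proj, f_proj).shape)  # (1024, 64), via a 1024 x 128 score matrix
```

Low-rank projection is only one of the families the survey covers; other models named in the summary take different routes to the same goal, for example kernel-based approximations of softmax attention (Performer) or locality-sensitive hashing of queries and keys (Reformer).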
ISSN: 0360-0300, 1557-7341
DOI: 10.1145/3530811