Vision Xformers: Efficient Attention for Image Classification
Main Authors | , |
---|---|
Format | Journal Article |
Language | English |
Published | 05.07.2021 |
Summary: Although transformers have become the neural architectures of choice for natural language processing, they require orders of magnitude more training data, GPU memory, and computation to compete with convolutional neural networks for computer vision. The attention mechanism of transformers scales quadratically with the length of the input sequence, and unrolled images have long sequence lengths. Moreover, transformers lack an inductive bias that is appropriate for images. We tested three modifications to vision transformer (ViT) architectures that address these shortcomings. Firstly, we alleviate the quadratic bottleneck by using linear attention mechanisms, called X-formers (where X ∈ {Performer, Linformer, Nyströmformer}), thereby creating Vision X-formers (ViXs). This resulted in up to a sevenfold reduction in GPU memory requirements. We also compared their performance with FNet and multi-layer perceptron mixers, which further reduced the GPU memory requirement. Secondly, we introduced an inductive bias for images by replacing the initial linear embedding layer with convolutional layers in ViX, which significantly increased classification accuracy without increasing the model size. Thirdly, we replaced the learnable 1D position embeddings in ViT with Rotary Position Embedding (RoPE), which increased classification accuracy for the same model size. We believe that incorporating such changes can democratize transformers by making them accessible to those with limited data and computing resources.
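
The first modification described in the summary replaces ViT's quadratic self-attention with a linear mechanism. Below is a minimal PyTorch sketch of one such mechanism, Linformer-style attention, which compresses the key and value sequences to a fixed length k with learned projections so the attention map is n × k instead of n × n. The module name, dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of Linformer-style linear attention (one of the X-formers named
# in the summary). Not the paper's code; dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinformerSelfAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 8, seq_len: int = 1024, k: int = 256):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        # Learned projections that compress the token axis of K and V to length k.
        self.proj_k = nn.Parameter(torch.randn(seq_len, k) / seq_len ** 0.5)
        self.proj_v = nn.Parameter(torch.randn(seq_len, k) / seq_len ** 0.5)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape  # n must not exceed the seq_len the projections were built for
        h = self.heads
        q, key, v = self.to_qkv(x).chunk(3, dim=-1)
        # Compress keys/values along the token axis: (b, n, d) -> (b, k, d).
        key = torch.einsum("bnd,nk->bkd", key, self.proj_k[:n, :])
        v = torch.einsum("bnd,nk->bkd", v, self.proj_v[:n, :])
        # Split heads.
        q = q.reshape(b, n, h, d // h).transpose(1, 2)
        key = key.reshape(b, -1, h, d // h).transpose(1, 2)
        v = v.reshape(b, -1, h, d // h).transpose(1, 2)
        attn = F.softmax(q @ key.transpose(-2, -1) * self.scale, dim=-1)  # (b, h, n, k)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.to_out(out)


layer = LinformerSelfAttention(dim=192, heads=3, seq_len=64, k=32)
tokens = torch.randn(2, 64, 192)   # e.g. 64 patch tokens per image
print(layer(tokens).shape)         # torch.Size([2, 64, 192])
```

Because the attention map has k columns rather than n, memory grows linearly with the number of image tokens; reductions of this kind are what drive the GPU memory savings reported in the summary.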
DOI: 10.48550/arxiv.2107.02239
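
The second and third modifications from the summary can also be sketched compactly. First, a convolutional tokenizer that replaces the single linear patch-embedding layer; the layer sizes below (3×3 convolutions, stride 2, embedding dimension 192 on 32×32 inputs) are assumptions for illustration, not the configuration used in the paper.

```python
# Sketch of a convolutional stem replacing ViT's linear patch embedding,
# supplying the image inductive bias described in the summary. Illustrative only.
import torch
import torch.nn as nn


class ConvTokenizer(nn.Module):
    """Turns an image into a token sequence with convolutions instead of a
    single linear projection of flattened patches."""

    def __init__(self, in_channels: int = 3, dim: int = 192):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, dim // 2, kernel_size=3, stride=2, padding=1),
            nn.GELU(),
            nn.Conv2d(dim // 2, dim, kernel_size=3, stride=2, padding=1),
            nn.GELU(),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.stem(images)                 # (b, dim, H/4, W/4)
        return feats.flatten(2).transpose(1, 2)   # (b, (H/4)*(W/4), dim) token sequence


tokens = ConvTokenizer()(torch.randn(2, 3, 32, 32))
print(tokens.shape)  # torch.Size([2, 64, 192])
```

Second, Rotary Position Embedding (RoPE) applied to query and key vectors in place of learnable 1D position embeddings. This uses the standard "rotate-half" formulation; the function name and base frequency are illustrative, not taken from the paper.

```python
# Sketch of RoPE: rotate channel pairs of q/k by position-dependent angles,
# so relative position is encoded in their dot products. Illustrative only.
import math
import torch


def rotary_embedding(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary position embedding to x of shape (batch, seq, dim)."""
    b, n, d = x.shape
    half = d // 2
    # Per-channel rotation frequencies: base^(-i / half) for i = 0 .. half-1.
    freqs = torch.exp(-math.log(base) * torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(n, dtype=torch.float32)[:, None] * freqs[None, :]  # (n, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate each (x1, x2) channel pair by its position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


q, k = torch.randn(2, 64, 192), torch.randn(2, 64, 192)
q_rot, k_rot = rotary_embedding(q), rotary_embedding(k)
print(q_rot.shape)  # torch.Size([2, 64, 192])
```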