Strong Baselines for Parameter Efficient Few-Shot Fine-tuning
Format | Journal Article
---|---
Language | English
Published | 04.04.2023
DOI | 10.48550/arxiv.2304.01917
Summary: Few-shot classification (FSC) entails learning novel classes given only a few examples per class after a pre-training (or meta-training) phase on a set of base classes. Recent works have shown that simply fine-tuning a pre-trained Vision Transformer (ViT) on new test classes is a strong approach for FSC. Fine-tuning ViTs, however, is expensive in time, compute and storage. This has motivated the design of parameter efficient fine-tuning (PEFT) methods, which fine-tune only a fraction of the Transformer's parameters. While these methods have shown promise, inconsistencies in experimental conditions make it difficult to disentangle their advantage from other experimental factors, including the feature extractor architecture, pre-trained initialization and fine-tuning algorithm, amongst others. In our paper, we conduct a large-scale, experimentally consistent, empirical analysis to study PEFTs for few-shot image classification. Through a battery of over 1.8k controlled experiments on large-scale few-shot benchmarks including Meta-Dataset (MD) and ORBIT, we uncover novel insights on PEFTs that cast light on their efficacy in fine-tuning ViTs for few-shot classification. Through our controlled empirical study, we have two main findings: (i) fine-tuning just the LayerNorm parameters (which we call LN-Tune) during few-shot adaptation is an extremely strong baseline across ViTs pre-trained with both self-supervised and supervised objectives (a sketch follows below); (ii) for self-supervised ViTs, simply learning a set of scaling parameters for each attention matrix (which we call AttnScale), along with a domain-residual adapter (DRA) module, leads to state-of-the-art performance on MD while being $\sim 9\times$ more parameter-efficient (also sketched below). Our extensive empirical findings set strong baselines and call for rethinking the current design of PEFT methods for FSC.
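The abstract describes LN-Tune as fine-tuning only the LayerNorm parameters of a pre-trained ViT during few-shot adaptation. The following is a minimal PyTorch sketch of that idea, not the authors' code: the `timm` backbone choice, the optimizer, and the learning rate are illustrative placeholders.

```python
import torch
from torch import nn
import timm  # assumed dependency; any pre-trained ViT checkpoint would do

# Load a pre-trained ViT backbone (the specific model is illustrative).
model = timm.create_model("vit_small_patch16_224", pretrained=True)

# Freeze every parameter of the backbone...
for param in model.parameters():
    param.requires_grad = False

# ...then unfreeze only the LayerNorm affine parameters (weight and bias).
for module in model.modules():
    if isinstance(module, nn.LayerNorm):
        for param in module.parameters():
            param.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
print(f"trainable params: {sum(p.numel() for p in trainable):,}")

# Fine-tune on the few-shot support set with any standard optimizer;
# the learning rate is a placeholder, not a tuned value from the paper.
optimizer = torch.optim.Adam(trainable, lr=1e-3)
```

Because only the LayerNorm weights and biases receive gradients, the number of updated parameters is a tiny fraction of the full backbone, which is what makes this a parameter-efficient baseline.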
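The abstract names AttnScale (a set of learned scaling parameters for each attention matrix) and a domain-residual adapter (DRA) without specifying shapes or placement. The sketch below is one plausible reading under explicit assumptions: a learnable per-head scale applied to each post-softmax attention matrix, and a zero-initialized residual linear adapter on token features. The class names `ScaledAttention` and `DomainResidualAdapter`, the per-head scale shape, and the post-softmax placement are all assumptions, not the authors' implementation.

```python
import torch
from torch import nn

class ScaledAttention(nn.Module):
    """Assumed AttnScale reading: learnable per-head scales on the
    (frozen) post-softmax attention matrix of a ViT block."""
    def __init__(self, num_heads: int):
        super().__init__()
        # One scale per head, initialized to 1 so training starts at identity.
        self.scale = nn.Parameter(torch.ones(num_heads, 1, 1))

    def forward(self, attn: torch.Tensor) -> torch.Tensor:
        # attn: (batch, heads, tokens, tokens); broadcasts the per-head scale.
        return attn * self.scale

class DomainResidualAdapter(nn.Module):
    """Hypothetical DRA: a lightweight zero-initialized residual linear map
    on token features, so the adapter starts as an identity function."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        nn.init.zeros_(self.proj.weight)
        nn.init.zeros_(self.proj.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.proj(x)

# Shape check with dummy tensors (12 heads, 197 tokens, 384-dim features).
attn = torch.softmax(torch.randn(2, 12, 197, 197), dim=-1)
scaled = ScaledAttention(num_heads=12)(attn)
feats = DomainResidualAdapter(dim=384)(torch.randn(2, 197, 384))
```

During few-shot adaptation, only the scales and the adapter would be trained while the backbone stays frozen, consistent with the parameter-efficiency claim in the summary.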