A linearized framework and a new benchmark for model selection for fine-tuning
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 29.01.2021 |
Summary: | Fine-tuning from a collection of models pre-trained on different domains (a "model zoo") is emerging as a technique to improve test accuracy in the low-data regime. However, model selection, i.e. how to pre-select the right model to fine-tune from a model zoo without performing any training, remains an open topic. We use a linearized framework to approximate fine-tuning, and introduce two new baselines for model selection -- Label-Gradient and Label-Feature Correlation. Since all model selection algorithms in the literature have been tested on different use cases and never compared directly, we introduce a new comprehensive benchmark for model selection comprising: i) a model zoo of single- and multi-domain models, and ii) many target tasks. Our benchmark highlights the accuracy gain of fine-tuning from a model zoo compared to fine-tuning ImageNet models. We show that our model selection baseline can select optimal models to fine-tune within a few selections and has the highest ranking correlation with fine-tuning accuracy compared to existing algorithms. |
DOI: | 10.48550/arxiv.2102.00084 |
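The record does not spell out how the Label-Feature Correlation baseline is computed, so the following is only a minimal sketch under an assumption: that the score measures how well pairwise feature similarities on a small target-task sample agree with pairwise label agreement, with higher scores ranking a pre-trained model as a better candidate for fine-tuning. The function name, the feature centering, and the ±1 label kernel below are illustrative choices, not the paper's definition.

```python
import numpy as np

def label_feature_correlation(features, labels):
    """Hypothetical sketch of a Label-Feature Correlation style score.

    features: (n, d) array of penultimate-layer features a pre-trained model
              produces on n target-task examples.
    labels:   (n,) integer class labels of those examples.
    Returns a scalar; higher means the model's features align better with the labels.
    """
    # Pairwise feature similarity between target examples (centered features).
    z = features - features.mean(axis=0, keepdims=True)
    k_feat = z @ z.T

    # Label agreement kernel: +1 if two examples share a class, -1 otherwise.
    same = (labels[:, None] == labels[None, :]).astype(float)
    k_label = 2.0 * same - 1.0

    # Cosine similarity between the two flattened similarity matrices.
    num = (k_feat * k_label).sum()
    den = np.linalg.norm(k_feat) * np.linalg.norm(k_label)
    return num / den


# Usage idea: score every model in the zoo on the same target-task sample,
# then fine-tune only the top-ranked models instead of all of them.
rng = np.random.default_rng(0)
feats = rng.normal(size=(64, 128))   # stand-in for one model's pre-trained features
labs = rng.integers(0, 5, size=64)   # stand-in for target-task labels
print(label_feature_correlation(feats, labs))
```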