Fast Context Adaptation via Meta-Learning

Bibliographic Details
Main Authors: Zintgraf, Luisa M, Shiarlis, Kyriacos, Kurin, Vitaly, Hofmann, Katja, Whiteson, Shimon
Format: Journal Article
Language: English
Published: 08.10.2018
Summary: We propose CAVIA for meta-learning, a simple extension to MAML that is less prone to meta-overfitting, easier to parallelise, and more interpretable. CAVIA partitions the model parameters into two parts: context parameters that serve as additional input to the model and are adapted on individual tasks, and shared parameters that are meta-trained and shared across tasks. At test time, only the context parameters are updated, leading to a low-dimensional task representation. We show empirically that CAVIA outperforms MAML for regression, classification, and reinforcement learning. Our experiments also highlight weaknesses in current benchmarks, in that the amount of adaptation needed in some cases is small.
DOI: 10.48550/arxiv.1810.03642
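
The summary describes adapting only a low-dimensional set of context parameters per task while keeping the shared network weights fixed. Below is a minimal sketch of that inner loop in PyTorch, assuming a few-shot regression setup; the names (ContextModel, adapt, inner_lr, context_dim) are illustrative and not taken from the authors' reference implementation.

```python
import torch
import torch.nn as nn

class ContextModel(nn.Module):
    """Network whose shared weights (theta) are meta-trained; the context
    parameters (phi) are fed in as extra inputs, per CAVIA's partitioning."""
    def __init__(self, in_dim=1, context_dim=5, hidden=40):
        super().__init__()
        self.context_dim = context_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim + context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, context):
        # Context parameters act as additional input to every example.
        ctx = context.expand(x.shape[0], -1)
        return self.net(torch.cat([x, ctx], dim=-1))

def adapt(model, x, y, inner_lr=1.0, steps=1):
    """Inner loop: gradient steps on the context parameters only.

    The shared weights stay fixed here; create_graph=True keeps the graph
    so an outer loop could backpropagate through this adaptation to
    meta-train the shared weights.
    """
    context = torch.zeros(1, model.context_dim, requires_grad=True)
    for _ in range(steps):
        loss = nn.functional.mse_loss(model(x, context), y)
        (grad,) = torch.autograd.grad(loss, context, create_graph=True)
        context = context - inner_lr * grad
    return context

# Usage: adapt to one task's support set, then predict on its query set.
# The adapted context vector is the low-dimensional task representation.
model = ContextModel()
x_support, y_support = torch.randn(10, 1), torch.randn(10, 1)
context = adapt(model, x_support, y_support)
pred = model(torch.randn(10, 1), context)
```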