Meta-Learning with Variational Bayes
Published in: arXiv.org
Main Author:
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 25.03.2021
Subjects:
Summary: The field of meta-learning seeks to improve the ability of today's machine learning systems to adapt efficiently to small amounts of data. Typically this is accomplished by training a system with a parametrized update rule to improve a task-relevant objective based on supervision or a reward function. However, in many domains of practical interest, task data is unlabeled, or reward functions are unavailable. In this paper we introduce a new approach to address the more general problem of generative meta-learning, which we argue is an important prerequisite for obtaining human-level cognitive flexibility in artificial agents, and can benefit many practical applications along the way. Our contribution leverages the AEVB framework and mean-field variational Bayes to create fast-adapting latent-space generative models. At the heart of our contribution is a new result, showing that for a broad class of deep generative latent variable models, the relevant VB updates do not depend on any generative neural network. The theoretical merits of our approach are reflected in empirical experiments.
ISSN: 2331-8422
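The two ingredients named in the summary — the AEVB framework (reparameterized Monte Carlo estimation of the ELBO) and a mean-field Gaussian variational posterior — can be illustrated with a toy sketch. This is a minimal illustration under stated assumptions, not the paper's method: a linear Gaussian decoder stands in for a deep generative network, and all shapes and variable names here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumption, not from the paper):
# observation x in R^D, latent z in R^K, linear Gaussian
# "decoder" p(x|z) = N(W z, I) in place of a neural network.
D, K = 5, 2
W = rng.normal(size=(D, K))
x = rng.normal(size=D)

# Mean-field Gaussian variational posterior
# q(z) = N(mu, diag(exp(log_var))).
mu = np.zeros(K)
log_var = np.zeros(K)

def elbo(mu, log_var, n_samples=1000):
    """Monte Carlo ELBO via the AEVB reparameterization trick:
    z = mu + sigma * eps with eps ~ N(0, I), so the estimate is a
    differentiable function of the variational parameters."""
    sigma = np.exp(0.5 * log_var)
    eps = rng.normal(size=(n_samples, K))
    z = mu + sigma * eps  # reparameterized samples from q(z)
    # Expected log-likelihood E_q[log p(x|z)], up to additive constants
    recon = -0.5 * np.sum((x - z @ W.T) ** 2, axis=1)
    # Analytic KL(q(z) || N(0, I)) for diagonal Gaussians
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return recon.mean() - kl

print(elbo(mu, log_var))
```

With `mu = 0` and `log_var = 0`, q(z) equals the standard-normal prior, so the KL term vanishes and the ELBO reduces to the (non-positive) expected log-likelihood term; training would ascend this bound in `mu` and `log_var`.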