Accelerated Bayesian Optimization through Weight-Prior Tuning


Bibliographic Details
Main Authors: Shilton, Alistair; Gupta, Sunil; Rana, Santu; Vellanki, Pratibha; Park, Laurence; Li, Cheng; Venkatesh, Svetha; Sutti, Alessandra; Rubin, David; Dorin, Thomas; Vahid, Alireza; Height, Murray; Slezak, Teo
Format: Journal Article
Language: English
Published: 20.05.2018
Summary: PMLR 108:635-645, 2020. Bayesian optimization (BO) is a widely-used method for optimizing expensive-to-evaluate problems. At the core of most BO methods is the modeling of the objective function using a Gaussian Process (GP) whose covariance is selected from a set of standard covariance functions. From a weight-space view, this models the objective as a linear function in the feature space implied by the given covariance $K$, with Gaussian weight prior ${\bf w} \sim \mathcal{N}({\bf 0}, {\bf I})$. In many practical applications there is data available that has a similar (covariance) structure to the objective but which, having a different form, cannot be used directly in standard transfer learning. In this paper we show how such auxiliary data may be used to construct a GP covariance corresponding to a more appropriate weight prior for the objective function. Building on this, we show that we may accelerate BO by modeling the objective function using this (learned) weight prior, which we demonstrate on both test functions and a practical application to short-polymer fibre manufacture.
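The weight-space view underlying the abstract can be sketched in a few lines of numpy. A GP with covariance $K(x, x') = \phi(x)^\top \Sigma\, \phi(x')$ is equivalent to a linear model $f(x) = {\bf w}^\top \phi(x)$ with weight prior ${\bf w} \sim \mathcal{N}({\bf 0}, \Sigma)$; the standard choice $\Sigma = {\bf I}$ recovers the usual dot-product kernel. The feature map, the auxiliary "family" of related functions, and the use of an empirical weight covariance as the tuned prior below are illustrative assumptions, not the paper's actual construction:

```python
import numpy as np

N_FEATURES = 6

def phi(x):
    """Toy feature map: low-order polynomial features of a scalar input."""
    return np.array([x**k for k in range(N_FEATURES)])

def kernel(x1, x2, Sigma):
    """Covariance implied by the weight prior w ~ N(0, Sigma)."""
    return phi(x1) @ Sigma @ phi(x2)

# Standard prior: Sigma = I gives the plain dot-product kernel.
Sigma_std = np.eye(N_FEATURES)

# Auxiliary data with a "similar covariance structure": fit linear models
# to several related functions, then use the empirical covariance of the
# fitted weights as a tuned prior (one plausible reading of the idea).
rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 50)
Phi = np.stack([phi(x) for x in xs])            # (50, 6) design matrix

aux_weights = []
for _ in range(20):
    # Related family: low-order weights dominate, high-order ones are small.
    w_true = rng.normal(scale=[1.0, 2.0, 0.1, 0.1, 0.01, 0.01])
    y = Phi @ w_true + 0.01 * rng.normal(size=len(xs))
    w_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    aux_weights.append(w_hat)

Sigma_tuned = np.cov(np.stack(aux_weights), rowvar=False)

# The tuned prior reshapes the implied GP covariance toward the auxiliary
# family; a GP surrogate built from this kernel is what the paper uses to
# accelerate BO.
k_std = kernel(0.5, 0.5, Sigma_std)
k_tuned = kernel(0.5, 0.5, Sigma_tuned)
print(k_std, k_tuned)
```

Note that `Sigma_tuned` is positive semi-definite by construction, so `kernel(·, ·, Sigma_tuned)` is a valid GP covariance; swapping it in for `Sigma_std` changes the prior over functions without changing the feature map.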
DOI: 10.48550/arXiv.1805.07852