Learning to Optimize Black-Box Functions with Extreme Limits on the Number of Function Evaluations

Bibliographic Details
Published in: Learning and Intelligent Optimization, Vol. 12931, pp. 7-24
Main Authors: Ansótegui, Carlos; Sellmann, Meinolf; Shah, Tapan; Tierney, Kevin
Format: Book Chapter
Language: English
Published: Springer International Publishing AG, Switzerland, 2021
Series: Lecture Notes in Computer Science
ISBN: 3030921204; 9783030921200
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-92121-7_2

Summary: We consider black-box optimization in which only an extremely limited number of function evaluations, on the order of 100, are affordable, and the function evaluations must be performed in even fewer batches of a limited number of parallel trials. This is a typical scenario when optimizing variable settings that are very costly to evaluate, for example in the context of simulation-based optimization or machine learning hyperparameterization. We propose an original method that uses established approaches to propose a set of points for each batch and then down-selects from these candidate points to the number of trials that can be run in parallel. The key novelty of our approach lies in the introduction of a hyperparameterized method for down-selecting the number of candidates to the allowed batch size, which is optimized offline using automated algorithm configuration. We tune this method for black-box optimization and then evaluate it on classical black-box optimization benchmarks. Our results show that it is possible to learn how to combine evaluation points suggested by highly diverse black-box optimization methods, conditioned on the progress of the optimization. Compared with the state of the art in black-box minimization and various other methods specifically geared towards few-shot minimization, we achieve an average reduction of 50% in normalized cost, which is a highly significant improvement in performance.
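The batched propose-then-down-select loop the summary describes can be sketched as follows. This is an illustrative reconstruction, not the authors' method: the two candidate generators (uniform sampling and Gaussian perturbation of the incumbent) stand in for the diverse black-box optimizers the paper combines, and the distance-based scoring rule with the hypothetical hyperparameter `w_dist` stands in for the offline-tuned, hyperparameterized down-selection.

```python
import random

def propose_candidates(best_x, dim, n_each, rng):
    # Stand-ins for the paper's diverse proposers: global uniform
    # samples plus local Gaussian perturbations of the incumbent.
    uniform = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_each)]
    local = [[xi + rng.gauss(0, 0.5) for xi in best_x] for _ in range(n_each)]
    return uniform + local

def down_select(candidates, evaluated, batch_size, w_dist=1.0):
    # Hypothetical down-selection rule: greedily keep candidates far
    # from all points already chosen or evaluated. `w_dist` plays the
    # role of a hyperparameter that would be tuned offline by
    # automated algorithm configuration in the paper's setup.
    chosen, pool = [], list(candidates)
    while pool and len(chosen) < batch_size:
        def score(c):
            ref = evaluated + chosen
            if not ref:
                return 0.0
            d = min(sum((a - b) ** 2 for a, b in zip(c, r)) ** 0.5
                    for r in ref)
            return w_dist * d
        best = max(pool, key=score)
        pool.remove(best)
        chosen.append(best)
    return chosen

def few_shot_minimize(f, dim, batches, batch_size, seed=0):
    # ~batches * batch_size total evaluations, run in few batches,
    # matching the extreme-budget regime of the abstract.
    rng = random.Random(seed)
    evaluated, best_x, best_y = [], [0.0] * dim, float("inf")
    for _ in range(batches):
        cands = propose_candidates(best_x, dim, 5 * batch_size, rng)
        batch = down_select(cands, evaluated, batch_size)
        for x in batch:  # these trials would run in parallel
            y = f(x)
            evaluated.append(x)
            if y < best_y:
                best_x, best_y = x, y
    return best_x, best_y

# Usage: a 100-evaluation budget (10 batches of 10 parallel trials)
# on the classic sphere benchmark.
sphere = lambda x: sum(v * v for v in x)
x_best, y_best = few_shot_minimize(sphere, dim=3, batches=10, batch_size=10)
```

The greedy max-min-distance selection is one simple way to make the chosen batch complementary rather than redundant; the paper instead learns how to weight and combine the proposers' suggestions conditioned on optimization progress.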