Algorithm Configuration: Learning Policies for the Quick Termination of Poor Performers

Bibliographic Details
Published in: Learning and Intelligent Optimization, Vol. 11353, pp. 220-224
Main Authors: Karapetyan, Daniel; Parkes, Andrew J.; Stützle, Thomas
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 01.01.2019
Series: Lecture Notes in Computer Science

Summary: One way to speed up the algorithm configuration task is to use short runs instead of long runs as much as possible, but without discarding the configurations that eventually do well on the long runs. We consider the problem of selecting the top-performing configurations of Conditional Markov Chain Search (CMCS), a general algorithm schema that includes, for example, VNS. We investigate how the structure of performance on short tests links with that on long tests, showing that significant differences arise between test domains. We propose a "performance envelope" method to exploit these links; it learns when runs should be terminated, yet automatically adapts to the domain.
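
The summary only sketches the idea, so the following is a minimal illustration of the general early-termination pattern it describes, not the authors' actual method. It assumes a minimisation objective and hypothetical helper names (learn_envelope, run_checkpoint, run_with_envelope): good configurations' objective values at short checkpoints define an envelope, and a candidate whose partial trace drifts outside that envelope is cut short instead of being given the full long run.

from typing import Callable, Dict, List, Sequence


def learn_envelope(traces: Sequence[Sequence[float]]) -> List[float]:
    """Build an upper envelope: for each checkpoint, the worst (largest,
    assuming minimisation) objective value seen among traces of
    configurations believed to be good."""
    return [max(values) for values in zip(*traces)]


def run_with_envelope(
    run_checkpoint: Callable[[int], float],
    envelope: Sequence[float],
    slack: float = 1.05,
) -> Dict[str, object]:
    """Run a configuration checkpoint by checkpoint; terminate as soon as its
    objective exceeds the envelope by more than the slack factor.

    run_checkpoint(i) is a hypothetical callback returning the configuration's
    objective value after checkpoint i of the run budget."""
    history: List[float] = []
    for i, bound in enumerate(envelope):
        value = run_checkpoint(i)
        history.append(value)
        if value > slack * bound:  # outside the envelope: likely poor performer
            return {"terminated_early": True, "checkpoint": i, "history": history}
    return {"terminated_early": False, "checkpoint": len(envelope) - 1, "history": history}


if __name__ == "__main__":
    # Synthetic short-run traces of three "good" configurations (minimisation).
    good_traces = [
        [100.0, 60.0, 40.0, 30.0],
        [110.0, 65.0, 45.0, 32.0],
        [105.0, 70.0, 50.0, 35.0],
    ]
    env = learn_envelope(good_traces)

    # A candidate that stagnates early and should be cut short.
    poor_trace = [120.0, 115.0, 110.0, 108.0]
    print(run_with_envelope(lambda i: poor_trace[i], env))

How tight the envelope is (here the per-checkpoint slack factor) is exactly the kind of quantity that, per the abstract, would need to adapt automatically to the test domain rather than being fixed by hand.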
ISBN: 3030053474, 9783030053475
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/978-3-030-05348-2_20