Tuning Mixed Input Hyperparameters on the Fly for Efficient Population Based AutoRL
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 30.06.2021 |
Summary: | Despite a series of recent successes in reinforcement learning (RL), many RL algorithms remain sensitive to hyperparameters. As such, there has recently been interest in the field of AutoRL, which seeks to automate design decisions to create more general algorithms. Recent work suggests that population based approaches may be effective AutoRL algorithms, by learning hyperparameter schedules on the fly. In particular, the PB2 algorithm is able to achieve strong performance in RL tasks by formulating online hyperparameter optimization as a time-varying GP-bandit problem, while also providing theoretical guarantees. However, PB2 is only designed to work for continuous hyperparameters, which severely limits its utility in practice. In this paper we introduce a new (provably) efficient hierarchical approach for optimizing both continuous and categorical variables, using a new time-varying bandit algorithm specifically designed for the population based training regime. We evaluate our approach on the challenging Procgen benchmark, where we show that explicitly modelling dependence between data augmentation and other hyperparameters improves generalization. |
---|---|
DOI: | 10.48550/arxiv.2106.15883 |
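
The abstract describes a hierarchical scheme: a time-varying bandit selects categorical hyperparameters (such as the data augmentation), and a GP-bandit then optimizes the continuous hyperparameters conditioned on that choice, inside a population based training loop. The sketch below is an illustrative reading of that idea, not the paper's algorithm or code: a discounted-UCB bandit stands in for the time-varying categorical bandit, a small GP-UCB loop stands in for the continuous GP-bandit, and the augmentation names, toy reward, and all constants are assumptions made to keep the example runnable.

```python
# Minimal sketch (NOT the authors' code) of hierarchical mixed-input tuning
# inside a toy PBT loop: a discounted bandit picks the categorical choice,
# then GP-UCB proposes a continuous value conditioned on it.
import numpy as np

AUGMENTATIONS = ["crop", "cutout", "color_jitter"]  # hypothetical categories
GAMMA = 0.9  # discount so the bandit can track a time-varying objective

def ucb(counts, totals, c=1.0):
    """Upper confidence bound per arm from (discounted) counts and rewards."""
    n = np.maximum(counts, 1e-8)
    return totals / n + c * np.sqrt(np.log(n.sum() + 1.0) / n)

def rbf(a, b, length=0.3):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_ucb_propose(x, y, beta=2.0, noise=1e-3):
    """Fit a 1-D GP to (hyperparameter, score) pairs on [0, 1] and return
    the grid point maximising the UCB acquisition mean + beta * std."""
    grid = np.linspace(0.0, 1.0, 101)
    if len(x) == 0:
        return float(np.random.uniform())
    K = rbf(x, x) + noise * np.eye(len(x))
    Ks = rbf(grid, x)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return float(grid[np.argmax(mean + beta * np.sqrt(np.maximum(var, 0.0)))])

def toy_reward(lr, aug):
    # Stand-in for an RL return: the best learning rate depends on which
    # augmentation is active -- the categorical/continuous dependence the
    # abstract says the method models explicitly.
    peak = {"crop": 0.3, "cutout": 0.6, "color_jitter": 0.8}[aug]
    return float(np.exp(-20.0 * (lr - peak) ** 2) + 0.05 * np.random.randn())

counts = np.zeros(len(AUGMENTATIONS))
totals = np.zeros(len(AUGMENTATIONS))
history = {a: ([], []) for a in AUGMENTATIONS}  # per-category (lr, score) data
population = [{"lr": float(np.random.uniform()),
               "aug": str(np.random.choice(AUGMENTATIONS))} for _ in range(4)]

for step in range(20):
    scores = [toy_reward(m["lr"], m["aug"]) for m in population]
    counts *= GAMMA  # discount old evidence (time-varying bandit)
    totals *= GAMMA
    for m, s in zip(population, scores):
        i = AUGMENTATIONS.index(m["aug"])
        counts[i] += 1.0
        totals[i] += s
        history[m["aug"]][0].append(m["lr"])
        history[m["aug"]][1].append(s)
    worst = int(np.argmin(scores))
    # Exploit: in full PBT the worst agent would inherit the best agent's
    # network weights here; weights are elided in this sketch. Explore
    # hierarchically: the bandit picks the category, then GP-UCB picks the
    # continuous value from that category's own observation history.
    aug = AUGMENTATIONS[int(np.argmax(ucb(counts, totals)))]
    xs, ys = history[aug]
    population[worst] = {"lr": gp_ucb_propose(np.array(xs), np.array(ys)),
                         "aug": aug}

print("final population:", population)
```

The discount step is the sketch's analogue of the "time-varying" aspect the abstract highlights: older bandit evidence is down-weighted so the categorical choice can drift as training progresses, and conditioning the GP on per-category data is one simple way to model the dependence between the augmentation and the continuous hyperparameters.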