SAMBA: safe model-based & active reinforcement learning


Bibliographic Details
Published in: Machine Learning, Vol. 111, No. 1, pp. 173–203
Main Authors: Cowen-Rivers, Alexander I.; Palenicek, Daniel; Moens, Vincent; Abdullah, Mohammed Amin; Sootla, Aivar; Wang, Jun; Bou-Ammar, Haitham
Format: Journal Article
Language: English
Published: New York: Springer US, 01.01.2022 (Springer Nature B.V.)
ISSN: 0885-6125, 1573-0565
DOI: 10.1007/s10994-021-06103-6

Summary: In this paper, we propose SAMBA, a novel framework for safe reinforcement learning that combines aspects from probabilistic modelling, information theory, and statistics. Our method builds upon PILCO to enable active exploration using novel acquisition functions for out-of-sample Gaussian process evaluation optimised through a multi-objective problem that supports conditional-value-at-risk constraints. We evaluate our algorithm on a variety of safe dynamical system benchmarks involving both low and high-dimensional state representations. Our results show orders of magnitude reductions in samples and violations compared to state-of-the-art methods. Lastly, we provide intuition as to the effectiveness of the framework by a detailed analysis of our acquisition functions and safety constraints.
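The summary mentions conditional-value-at-risk (CVaR) constraints. As an illustration only, and not the paper's implementation, CVaR at level alpha is the mean cost in the worst (1 - alpha) tail of the cost distribution; a minimal empirical estimator (function name and sample distribution are assumptions for this sketch) might look like:

```python
import numpy as np

def empirical_cvar(costs, alpha=0.95):
    """Empirical conditional value-at-risk: the mean cost in the
    worst (1 - alpha) fraction of samples. Illustrative sketch only."""
    costs = np.sort(np.asarray(costs, dtype=float))
    var = np.quantile(costs, alpha)   # value-at-risk threshold at level alpha
    tail = costs[costs >= var]        # the worst-case tail of the samples
    return tail.mean()

# Hypothetical cost samples, standing in for accumulated safety costs
rng = np.random.default_rng(0)
costs = rng.normal(loc=1.0, scale=0.5, size=10_000)
print(empirical_cvar(costs, alpha=0.95))
```

A CVaR constraint of this form penalises tail risk rather than average cost, which is why it is attractive for safety: it always upper-bounds both the mean and the value-at-risk at the same level.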