Sequential Bayesian experimental designs via reinforcement learning
Main Author | |
---|---|
Format | Journal Article |
Language | English |
Published | 13.02.2022 |
Subjects | |
Online Access | Get full text |
Summary: | Bayesian experimental design (BED) has been used as a method for conducting
efficient experiments based on Bayesian inference. Existing methods, however, mostly
focus on maximizing the expected information gain (EIG); the cost of experiments and
sample efficiency are often not taken into account. To address this issue and enhance
the practical applicability of BED, in this paper we propose a new approach, Sequential
Experimental Design via Reinforcement Learning, which constructs BED sequentially by
applying reinforcement learning. Reinforcement learning is a branch of machine learning
in which an agent learns a policy that maximizes its reward by interacting with an
environment. This interaction closely mirrors the structure of a sequential experiment,
and reinforcement learning is indeed a method that excels at sequential decision making.
By proposing a new real-world-oriented experimental environment, our approach aims to
maximize the EIG while simultaneously accounting for the cost of experiments and sample
efficiency. We conduct numerical experiments on three different examples and confirm
that our method outperforms existing methods on indices such as the EIG and sampling
efficiency, indicating that the proposed method and experimental environment can make a
significant contribution to real-world applications of BED. |
---|---|
DOI: | 10.48550/arxiv.2202.07472 |
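The abstract above rests on two ingredients: estimating the expected information gain of a candidate design and choosing designs sequentially while penalizing experimental cost. The sketch below is not the paper's code; it illustrates these ideas with a nested Monte Carlo EIG estimator and a greedy (non-RL) sequential loop on an assumed linear-Gaussian toy model y = θ·d + ε. The function names, the design grid, and the cost weight `lam` are hypothetical choices made for this example; the paper instead trains an RL policy to select designs.

```python
# Illustrative sketch only: nested Monte Carlo EIG estimation plus a greedy
# sequential design loop on a toy linear-Gaussian model. All names and
# constants here are assumptions for the example, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
SIGMA = 0.5  # observation noise std dev in y = theta * d + eps

def log_lik(y, theta, d):
    """Gaussian log-likelihood log p(y | theta, d)."""
    return -0.5 * np.log(2 * np.pi * SIGMA**2) - (y - theta * d) ** 2 / (2 * SIGMA**2)

def eig_nmc(d, mu, var, n_outer=200, n_inner=200):
    """Nested Monte Carlo estimate of EIG(d) = E[log p(y|theta,d) - log p(y|d)]
    under the current Gaussian posterior N(mu, var) over theta."""
    theta = mu + np.sqrt(var) * rng.standard_normal(n_outer)
    y = theta * d + SIGMA * rng.standard_normal(n_outer)
    # Inner samples approximate the marginal likelihood p(y | d)
    # (plain exp/mean for clarity; a logsumexp would be more stable).
    theta_in = mu + np.sqrt(var) * rng.standard_normal((n_inner, 1))
    log_marg = np.log(np.mean(np.exp(log_lik(y, theta_in, d)), axis=0))
    return np.mean(log_lik(y, theta, d) - log_marg)

# Greedy sequential loop: at each step pick the design maximizing estimated
# EIG minus a per-experiment cost penalty, observe, and update the posterior
# with the conjugate Gaussian formulas for this linear model.
mu, var = 0.0, 1.0                      # prior N(0, 1) over theta
theta_true = 0.7                        # ground truth used to simulate data
designs = np.linspace(0.1, 2.0, 10)     # assumed candidate design grid
lam = 0.05                              # assumed cost weight, cost(d) = |d|

for t in range(5):
    scores = [eig_nmc(d, mu, var) - lam * abs(d) for d in designs]
    d = designs[int(np.argmax(scores))]
    y = theta_true * d + SIGMA * rng.standard_normal()  # run the "experiment"
    prec = 1.0 / var + d**2 / SIGMA**2                  # conjugate update
    mu = (mu / var + d * y / SIGMA**2) / prec
    var = 1.0 / prec
    print(f"step {t}: d={d:.2f}, y={y:+.2f}, posterior N({mu:.3f}, {var:.4f})")
```

In this sketch the cost term plays the role the abstract assigns to experimental cost: a design with slightly lower EIG but lower cost can win, which is the trade-off the paper's RL agent is trained to balance across a whole sequence of experiments rather than one step at a time.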