Prior Preference Learning from Experts: Designing a Reward with Active Inference
Format | Journal Article |
---|---|
Language | English |
Published | 21.01.2021 |
Summary: Active inference may be defined as Bayesian modeling of the brain with a biologically plausible model of the agent. Its primary idea relies on the free energy principle and the agent's prior preference: an agent chooses actions that steer future observations toward its prior preference. In this paper, we claim that active inference can be interpreted through reinforcement learning (RL) algorithms and establish a theoretical connection between them. We extend the concept of expected free energy (EFE), a core quantity in active inference, and claim that EFE can be treated as a negative value function. Motivated by the concept of prior preference and this theoretical connection, we propose a simple but novel method for learning a prior preference from experts. This shows that the inverse RL problem can be approached from the new perspective of active inference. Experimental results on prior preference learning demonstrate the feasibility of active inference with EFE-based rewards and its application to an inverse RL problem.
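The abstract's central claim, that expected free energy can be treated as a negative value function, can be illustrated with a minimal sketch. In a common decomposition, EFE is a risk term (the KL divergence between the predicted observation distribution and the prior preference distribution) plus an ambiguity term, so an EFE-based reward is simply its negation. The function and variable names below (`efe_reward`, `q_obs`, `prior_pref`) are hypothetical illustrations, not identifiers from the paper:

```python
import numpy as np

def kl(q, p):
    # KL divergence KL[q || p] between two categorical distributions.
    # Assumes strictly positive probabilities in both arguments.
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    return float(np.sum(q * np.log(q / p)))

def efe_reward(q_obs, prior_pref, ambiguity=0.0):
    # One common decomposition: EFE ~= risk + ambiguity,
    # with risk = KL[q(o) || p(o)], p(o) the prior preference.
    # Treating EFE as a negative value function gives reward = -EFE.
    return -(kl(q_obs, prior_pref) + ambiguity)

# Hypothetical example: predicted observations vs. preferred observations.
q = [0.7, 0.2, 0.1]   # q(o): predicted observation distribution
c = [0.8, 0.1, 0.1]   # p(o): prior preference over observations
r = efe_reward(q, c)  # negative, since q deviates from the preference
```

Under this sketch, the reward is maximal (zero, with zero ambiguity) exactly when predicted observations match the prior preference, which is the sense in which an agent minimizing EFE is driven toward its preferred observations.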
DOI: 10.48550/arxiv.2101.08937