A Tensor Network Approach to Finite Markov Decision Processes
Format: Journal Article
Language: English
Published: 12.02.2020
Summary: Tensor network (TN) techniques - often used in the context of quantum many-body physics - have shown promise as a tool for tackling machine learning (ML) problems. The application of TNs to ML, however, has mostly focused on supervised and unsupervised learning. Yet, with their direct connection to hidden Markov chains, TNs are also naturally suited to Markov decision processes (MDPs), which provide the foundation for reinforcement learning (RL). Here we introduce a general TN formulation of finite, episodic and discrete MDPs. We show how this formulation allows us to exploit algorithms developed for TNs for policy optimisation, the key aim of RL. As an application we consider the issue - formulated as an RL problem - of finding a stochastic evolution that satisfies specific dynamical conditions, using the simple example of random walk excursions as an illustration.
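The abstract casts policy optimisation over finite, episodic, discrete MDPs in tensor-network language. As a rough illustration of the underlying idea - not the paper's actual construction - the expected return of a fixed stochastic policy in a toy finite MDP can be computed by contracting the policy, reward and transition tensors step by step with `numpy.einsum`. All shapes, data and names below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, horizon = 3, 2, 5  # toy sizes, chosen arbitrarily

# Transition tensor T[s, a, s'] with normalised rows (a valid stochastic map)
T = rng.random((n_states, n_actions, n_states))
T /= T.sum(axis=2, keepdims=True)

# Reward table r[s, a] and a fixed stochastic policy pi[s, a]
r = rng.random((n_states, n_actions))
pi = rng.random((n_states, n_actions))
pi /= pi.sum(axis=1, keepdims=True)

# Start deterministically in state 0
d = np.zeros(n_states)
d[0] = 1.0

# Expected return of one episode: at each step, contract the current state
# distribution with the policy and reward tensors, then propagate the
# distribution through the transition tensor.
ret = 0.0
for _ in range(horizon):
    ret += np.einsum("s,sa,sa->", d, pi, r)   # expected reward this step
    d = np.einsum("s,sa,sab->b", d, pi, T)    # next-step state distribution

print(ret)
```

The whole episode is thus one big tensor contraction in which the policy tensors are the free parameters - the structure the paper exploits when it applies TN optimisation algorithms to policy search.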
DOI: 10.48550/arxiv.2002.05185