An Approximate Dynamic Programming Approach to Multiagent Persistent Monitoring in Stochastic Environments With Temporal Logic Constraints

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, Vol. 62, No. 9, pp. 4549-4563
Main Authors: Kun Deng, Yushan Chen, Calin Belta
Format: Journal Article
Language: English
Published: IEEE, 01.09.2017

Summary: We consider the problem of generating control policies for a team of robots moving in a stochastic environment. The team is required to achieve an optimal surveillance mission, in which a certain "optimizing proposition" needs to be satisfied infinitely often. In addition, a correctness requirement expressed as a temporal logic formula is imposed. By modeling the robots as game transition systems and the environmental elements as Markov chains, the problem reduces to finding an optimal control policy for a Markov decision process that also satisfies a temporal logic specification. The existing approaches based on dynamic programming are computationally intensive and thus not feasible for large environments and/or large numbers of robots. We propose an approximate dynamic programming (ADP) framework to obtain suboptimal policies with reduced computational complexity. Specifically, we choose a set of basis functions to approximate the optimal costs and find the best approximation through the least-squares method. We also propose a simulation-based ADP approach that further reduces the computational complexity by employing low-dimensional calculations and simulation samples.
ISSN: 0018-9286 (print); 1558-2523 (electronic)
DOI: 10.1109/TAC.2017.2678920
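
To make the least-squares step in the summary concrete, the following is a minimal sketch of approximating a cost-to-go with a small set of basis functions by solving a projected Bellman equation. This is an illustration only, not the paper's algorithm: it assumes a toy discounted-cost MDP under a fixed policy, and it omits the temporal logic constraints and the simulation-based variant; the state space, the choice of features, and the discount factor below are all hypothetical.

    import numpy as np

    # Hypothetical small MDP under a fixed policy: row-stochastic
    # transition matrix P, per-stage cost g, discount factor alpha.
    n_states = 6
    rng = np.random.default_rng(0)
    P = rng.random((n_states, n_states))
    P /= P.sum(axis=1, keepdims=True)   # normalize rows to probabilities
    g = rng.random(n_states)            # stage costs
    alpha = 0.9                         # discount factor

    # Basis functions: approximate the cost-to-go J by Phi @ r, where
    # each column of Phi is a hand-picked feature evaluated per state.
    s = np.arange(n_states, dtype=float)
    Phi = np.column_stack([
        np.ones(n_states),              # constant feature
        s,                              # linear feature
        s ** 2,                         # quadratic feature
    ])

    # Least-squares fixed point: impose Phi r ~= g + alpha P Phi r by
    # projecting the Bellman equation onto span(Phi), which reduces to
    # the small linear system  Phi^T (Phi - alpha P Phi) r = Phi^T g.
    A = Phi.T @ (Phi - alpha * (P @ Phi))
    b = Phi.T @ g
    r = np.linalg.solve(A, b)

    J_approx = Phi @ r                  # approximate cost-to-go
    J_exact = np.linalg.solve(np.eye(n_states) - alpha * P, g)
    print("basis weights r:", np.round(r, 3))
    print("approx J:", np.round(J_approx, 3))
    print("exact  J:", np.round(J_exact, 3))

The design choice this sketch illustrates is the one the summary names: rather than computing the exact cost-to-go over the full state space, one solves a linear system whose dimension equals the number of basis functions, which is what makes the approach tractable for large environments and robot teams.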