Stochastic optimal control via Bellman's principle

Bibliographic Details
Published in: Automatica (Oxford), Vol. 39, No. 12, pp. 2109–2114
Main Authors: Crespo, Luis G.; Sun, Jian-Qiao
Format: Journal Article
Language: English
Published: Oxford: Elsevier Ltd, 01.12.2003

Summary: This paper presents a strategy for finding optimal controls of non-linear systems subject to random excitations. The method is capable of generating global control solutions when state and control constraints are present. The solution is global in the sense that controls for all initial conditions in a region of the state space are obtained. The approach is based on Bellman's principle of optimality, the cumulant neglect closure method and the short-time Gaussian approximation. Problems with state-dependent diffusion terms, non-closeable hierarchies of moment equations for the states and singular state boundary conditions are considered in the examples. The uncontrolled and controlled system responses are evaluated by creating a Markov chain with a control-dependent transition probability matrix via the generalized cell mapping method. In all numerical examples, excellent controlled performance was obtained.
ISSN: 0005-1098, 1873-2836
DOI: 10.1016/S0005-1098(03)00238-3
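The summary above describes a pipeline in which the state region is discretized into cells, the controlled dynamics are summarized by a control-dependent transition probability matrix, and Bellman's principle of optimality is applied backward in time to obtain a control for every cell, i.e. for every initial condition in the region. The sketch below is a hypothetical illustration of that backward recursion only, not the authors' implementation: the cell geometry, the running cost, and the transition matrices are placeholders (the paper obtains the transition probabilities from the cumulant neglect closure and short-time Gaussian approximation via generalized cell mapping), and the names transition_matrix, stage_cost, and controls are invented for this example.

```python
import numpy as np

# Hypothetical sketch: Bellman backward recursion over a cell-based Markov
# chain with a control-dependent transition matrix P(u). All quantities here
# are placeholders chosen only to make the recursion runnable.

n_cells = 100                              # cells discretizing the state region
controls = np.linspace(-1.0, 1.0, 11)      # admissible (bounded) control levels
n_steps = 50                               # finite optimization horizon

rng = np.random.default_rng(0)

def transition_matrix(u):
    """Placeholder for the cell-mapping step: a row-stochastic matrix of
    one-step transition probabilities between cells under control u."""
    P = rng.random((n_cells, n_cells)) + 1.0
    return P / P.sum(axis=1, keepdims=True)

def stage_cost(u):
    """Placeholder running cost per cell for control level u."""
    x = np.linspace(-2.0, 2.0, n_cells)    # representative state of each cell
    return x**2 + 0.1 * u**2

# Precompute P(u) for each admissible control level.
P_u = {u: transition_matrix(u) for u in controls}

# Bellman recursion: V_k(cell) = min_u [ cost(cell, u) + E[V_{k+1}(next cell)] ].
V = np.zeros(n_cells)                      # terminal cost, taken as zero here
policy = np.zeros((n_steps, n_cells))
for k in reversed(range(n_steps)):
    Q = np.stack([stage_cost(u) + P_u[u] @ V for u in controls])  # (n_u, n_cells)
    best = Q.argmin(axis=0)                # minimizing control index per cell
    policy[k] = controls[best]
    V = Q[best, np.arange(n_cells)]

# policy[0] now assigns a control value to every cell, i.e. to every initial
# condition in the discretized region of the state space.
```

Under these assumptions, the resulting policy is "global" in the sense used in the summary: one backward sweep yields a feedback control for all cells at once, rather than a single optimal trajectory from one initial condition.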