Reducing the computational effort of optimal process controllers for continuous state spaces by using incremental learning and post-decision state formulations

Bibliographic Details
Published in: Journal of Process Control, Vol. 24, No. 3, pp. 133–143
Main Authors: Senn, Melanie; Link, Norbert; Pollak, Jürgen; Lee, Jay H.
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.03.2014

Summary:
•We have realized multiple optimal control approaches for continuous state spaces.
•Stochastic influences in the state space are taken into account.
•We compare the control approaches by their efficiency and computational effort.
•The evaluation sample problem allows us to apply all suggested approaches.
•We compare batch and incremental learning for Artificial Neural Networks.

Multistage optimization problems that are represented by Markov Decision Processes (MDPs) can be solved by Dynamic Programming (DP). However, in process control problems involving continuous state spaces, the classical DP formulation leads to computational intractability known as the ‘curse of dimensionality’. This issue can be overcome by Approximate Dynamic Programming (ADP), which uses simulation-based sampling in combination with value function approximators that replace the traditional value tables. In this paper, we investigate different ADP approaches in the context of a cup deep drawing process, which is simulated by a finite element model. In applying ADP to the problem, Artificial Neural Networks (ANNs) are created as global parametric function approximators to represent the value functions as well as the state transitions. For each time step of the finite time horizon, time-indexed function approximations are built. We compare a classical DP approach to a backward ADP approach with batch learning of the ANNs and a forward ADP approach with incremental learning of the ANNs. In the batch learning mode, the ANNs are trained from temporary value tables constructed by exhaustive search backwards in time. In the incremental learning mode, on the other hand, the ANNs are initialized and then improved continually using data obtained by stochastic sampling of the simulation moving forward in time. For both learning modes, we obtain value function approximations with good performance. The cup deep drawing process under consideration is of medium model complexity and therefore allows us to apply all three methods and to compare them with respect to the achieved efficiency, the associated computational effort, and the decision behavior of the controllers.
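The contrast between the two learning modes can be illustrated with a minimal sketch. The Python code below is not the authors' implementation: it uses a hypothetical one-dimensional state, an assumed transition and cost model, a short finite horizon, a five-term polynomial feature map standing in for the time-indexed ANNs, and an assumed learning rate. It only shows how a batch fit from a backward-computed temporary value table differs from incremental updates along forward-sampled trajectories.

```python
# Illustrative sketch only: toy 1-D continuous-state, finite-horizon problem.
# All model details (dynamics, cost, horizon, features, learning rate) are
# assumptions made for illustration, not taken from the paper.
import numpy as np

T = 5                                    # finite time horizon
actions = np.linspace(-1.0, 1.0, 9)      # discretized control actions
grid = np.linspace(0.0, 1.0, 101)        # state samples for the temporary value table

def step(x, u, noise=0.0):
    """Hypothetical state transition with an optional stochastic disturbance."""
    return float(np.clip(x + 0.1 * u + noise, 0.0, 1.0))

def cost(x, u):
    """Hypothetical stage cost: reach the target state 0.8 with small controls."""
    return (x - 0.8) ** 2 + 0.01 * u ** 2

def features(x):
    """Polynomial features standing in for the time-indexed ANN approximators."""
    return np.array([x ** k for k in range(5)])

def value(w_t, x):
    """Approximate cost-to-go; `None` denotes the zero terminal value."""
    return 0.0 if w_t is None else float(w_t @ features(x))

# Backward ADP, batch learning: exhaustive search backwards in time fills a
# temporary value table on the grid, which is then fitted in one batch per step.
w_batch = [None] * (T + 1)
for t in reversed(range(T)):
    table = np.array([min(cost(x, u) + value(w_batch[t + 1], step(x, u))
                          for u in actions) for x in grid])
    phi = np.stack([features(x) for x in grid])
    w_batch[t], *_ = np.linalg.lstsq(phi, table, rcond=None)   # batch fit

# Forward ADP, incremental learning: the approximators are initialized and then
# improved continually from stochastically sampled trajectories moving forward.
rng = np.random.default_rng(0)
w_inc = [np.zeros(5) for _ in range(T)] + [None]   # zero init, terminal value 0
alpha = 0.05                                       # learning rate (assumed)
for episode in range(3000):
    x = rng.uniform(0.0, 1.0)                      # sampled start state
    for t in range(T):
        # one-step lookahead target using the current (improving) approximation
        target = min(cost(x, u) + value(w_inc[t + 1], step(x, u)) for u in actions)
        phi = features(x)
        w_inc[t] += alpha * (target - w_inc[t] @ phi) * phi    # incremental update
        u_star = min(actions, key=lambda u: cost(x, u) + value(w_inc[t + 1], step(x, u)))
        x = step(x, u_star, noise=rng.normal(0.0, 0.01))       # stochastic transition

# Greedy controller derived from either set of value function approximations.
def act(w, x, t):
    return min(actions, key=lambda u: cost(x, u) + value(w[t + 1], step(x, u)))

x = 0.2
for t in range(T):
    x = step(x, act(w_batch, x, t))
print("batch-learned controller, final state:", round(x, 3))
```

In this sketch the backward pass fits each time-indexed approximator once from a complete value table, whereas the forward pass never builds that table and instead spreads the work over many sampled trajectories, trading a lower per-iteration effort for more iterations.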
ISSN: 0959-1524, 1873-2771
DOI: 10.1016/j.jprocont.2014.01.002