Physics-Informed Neural Network Policy Iteration: Algorithms, Convergence, and Verification
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 15.02.2024 |
Subjects | |
DOI | 10.48550/arxiv.2402.10119 |
Summary: Solving nonlinear optimal control problems is a challenging task, particularly for high-dimensional problems. We propose algorithms for model-based policy iteration to solve nonlinear optimal control problems with convergence guarantees. The main component of our approach is an iterative procedure that uses neural approximations to solve the linear partial differential equations (PDEs) arising in each policy-evaluation step, ensuring convergence. We present two variants of the algorithm. The first variant formulates the optimization problem as a linear least-squares problem, drawing inspiration from the extreme learning machine (ELM) approach to solving PDEs; it handles low-dimensional problems efficiently and with high accuracy. The second variant is based on physics-informed neural networks (PINNs) for solving PDEs and has the potential to address high-dimensional problems. We demonstrate that both algorithms outperform traditional approaches, such as Galerkin methods, by a significant margin. We provide a theoretical analysis of both algorithms in terms of convergence of the neural approximations toward the true optimal solutions in a general setting. Furthermore, we employ formal verification techniques to demonstrate the verifiable stability of the resulting controllers.
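To illustrate the first (ELM-style) variant described in the summary, the following is a minimal sketch, not the paper's implementation: the dynamics `f`, the fixed policy `u`, the running cost, and the network sizes are all hypothetical choices. The point it shows is that, with random frozen hidden-layer weights, the residual of the linear policy-evaluation PDE is linear in the output weights, so a single least-squares solve yields the value of the fixed policy.

```python
# Minimal sketch (not the paper's exact formulation): ELM-style policy
# evaluation for a scalar system dx/dt = f(x) + u with a fixed policy u(x).
# V is approximated by one hidden layer with random, frozen weights, so the
# residual of the linear PDE
#     V'(x) * (f(x) + u(x)) + x**2 + u(x)**2 = 0,   V(0) = 0,
# is linear in the output weights and can be solved by least squares.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dynamics, policy, and running cost (illustrative choices only).
f = lambda x: x - x**3            # drift
u = lambda x: -2.0 * x            # fixed policy being evaluated
q = lambda x: x**2 + u(x)**2      # running cost under this policy

# Random hidden layer (frozen), tanh features.
m = 200                           # number of hidden neurons
w = rng.normal(size=m)            # hidden weights
b = rng.uniform(-1.0, 1.0, m)     # hidden biases
phi  = lambda x: np.tanh(np.outer(x, w) + b)                  # features (n, m)
dphi = lambda x: (1.0 - np.tanh(np.outer(x, w) + b)**2) * w   # d/dx of features

# Collocation points on the domain of interest.
xs = np.linspace(-1.0, 1.0, 400)

# The PDE residual is linear in the output weights c: A @ c = rhs.
A = dphi(xs) * (f(xs) + u(xs))[:, None]
rhs = -q(xs)

# Append the anchoring condition V(0) = 0 as an extra (weighted) row.
A = np.vstack([A, 100.0 * phi(np.array([0.0]))])
rhs = np.concatenate([rhs, [0.0]])

c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
V = lambda x: phi(np.atleast_1d(x)) @ c   # approximate value of the fixed policy
print("V(0.5) ≈", V(0.5)[0])
```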
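The second (PINN-based) variant can be sketched in the same spirit. The sketch below is illustrative only and is not taken from the paper: the scalar dynamics, running cost, network architecture, number of iterations, and the control-affine improvement step u_{k+1}(x) = -0.5 V'(x) (assuming g = 1, R = 1) are assumptions chosen to keep the example self-contained. It shows the outer policy-iteration loop, with each policy evaluation performed by minimizing the squared residual of the linear PDE at sampled collocation points.

```python
# Minimal sketch (illustrative, not the paper's implementation): PINN-based
# policy iteration for the scalar system dx/dt = f(x) + u with running cost
# x^2 + u^2. Policy evaluation trains V_theta against the residual of the
# linear PDE  V'(x) * (f(x) + u_k(x)) + x^2 + u_k(x)^2 = 0  with V(0) = 0;
# policy improvement sets u_{k+1}(x) = -0.5 * V'(x).
import copy
import torch

torch.manual_seed(0)
f = lambda x: x - x**3                        # hypothetical drift

V = torch.nn.Sequential(                      # value-function approximator
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def grad_V(net, x):
    """dV/dx at the points x (shape (n, 1)), keeping the graph for training."""
    x = x.detach().clone().requires_grad_(True)
    return torch.autograd.grad(net(x).sum(), x, create_graph=True)[0]

def make_policy(net):
    frozen = copy.deepcopy(net)               # freeze the current value estimate
    # Policy improvement for this control-affine problem (g = 1, R = 1).
    return lambda x: -0.5 * grad_V(frozen, x).detach()

policy = lambda x: -2.0 * x                   # initial stabilizing policy

for k in range(5):                            # outer policy-iteration loop
    opt = torch.optim.Adam(V.parameters(), lr=1e-3)
    for step in range(2000):                  # PINN policy-evaluation solve
        x = 2.0 * torch.rand(256, 1) - 1.0    # collocation points in [-1, 1]
        u = policy(x)
        residual = grad_V(V, x) * (f(x) + u) + x**2 + u**2
        anchor = V(torch.zeros(1, 1))         # enforce V(0) = 0
        loss = residual.pow(2).mean() + 10.0 * anchor.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    policy = make_policy(V)                   # policy improvement
    print(f"iteration {k}: PDE residual loss {loss.item():.3e}")
```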