A recurrent neural network for nonlinear convex optimization subject to nonlinear inequality constraints
Published in: IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 51, no. 7, pp. 1385-1394
Main Authors:
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2004
Summary: This paper presents a novel recurrent neural network for solving nonlinear convex programming problems subject to nonlinear inequality constraints. Under the condition that the objective function is convex and all constraint functions are strictly convex, or that the objective function is strictly convex and the constraint functions are convex, the proposed neural network is proven to be stable in the sense of Lyapunov and globally convergent to an exact optimal solution. Compared with existing neural networks for such nonlinear optimization problems, the proposed network has two major advantages: it can solve convex programming problems with general convex inequality constraints, and it does not require a Lipschitz condition on the objective or constraint functions. Simulation results further illustrate the global convergence and performance of the proposed neural network for constrained nonlinear optimization.
ISSN: 1549-8328, 1558-0806
DOI: 10.1109/TCSI.2004.830694
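
The summary above describes the network only at a high level and does not reproduce its dynamics. As a rough, assumption-laden illustration of the general idea (a recurrent network realized as a primal-dual ODE whose equilibria are KKT points), the sketch below simulates a generic projection-type dynamics on a small convex program with nonlinear inequality constraints. The dynamics, the example objective and constraints, and the solver settings are illustrative choices, not the model proposed in the paper.

```python
# Minimal sketch (not the paper's exact model) of a projection-type recurrent
# neural network for  min f(x)  s.t.  g(x) <= 0, written as a primal-dual ODE:
#     dx/dt = -( grad f(x) + Jg(x)^T y )
#     dy/dt = max(0, y + g(x)) - y          (keeps the dual variables y >= 0)
# At an equilibrium these equations reduce to the KKT conditions of the program.
import numpy as np
from scipy.integrate import solve_ivp

def grad_f(x):
    # strictly convex objective f(x) = (x1 - 2)^2 + (x2 - 1)^2
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

def g(x):
    # convex inequality constraints g(x) <= 0 (one of them nonlinear)
    return np.array([x[0] ** 2 - x[1],       # x1^2 <= x2
                     x[0] + x[1] - 2.0])     # x1 + x2 <= 2

def jac_g(x):
    # Jacobian of the constraint map g
    return np.array([[2.0 * x[0], -1.0],
                     [1.0, 1.0]])

def dynamics(t, z):
    x, y = z[:2], z[2:]
    dx = -(grad_f(x) + jac_g(x).T @ y)       # primal descent on the Lagrangian
    dy = np.maximum(0.0, y + g(x)) - y       # projected dual ascent
    return np.concatenate([dx, dy])

z0 = np.zeros(4)                             # arbitrary initial primal-dual state
sol = solve_ivp(dynamics, (0.0, 50.0), z0, rtol=1e-8, atol=1e-10)
x_star = sol.y[:2, -1]
print("equilibrium x:", x_star)              # for this example the KKT point is (1, 1)
```

For this particular problem the KKT point can be checked by hand: both constraints are active at x* = (1, 1) with multipliers y* = (2/3, 2/3), so the trajectory of the ODE should settle near that point from the zero initial state.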