Two Projection Neural Networks With Reduced Model Complexity for Nonlinear Programming
Published in | IEEE Transactions on Neural Networks and Learning Systems, Vol. 31, No. 6, pp. 2020-2029
---|---
Main Authors |
Format | Journal Article
Language | English
Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.06.2020
Summary | Recent reports show that projection neural networks with a low-dimensional state space can markedly increase computation speed. This paper proposes two projection neural networks with reduced model dimension and complexity (RDPNNs) for solving nonlinear programming (NP) problems. Compared with existing projection neural networks for solving NP, the two proposed RDPNNs have a lower-dimensional state space and lower model complexity. Under the condition that the Hessian matrix of the associated Lagrangian function is positive semi-definite, and positive definite at each Karush-Kuhn-Tucker (KKT) point, the two RDPNNs are proven to be globally stable in the sense of Lyapunov and to converge globally to a point satisfying the reduced optimality condition of NP. The two RDPNNs are therefore theoretically guaranteed to solve convex NP problems as well as a class of nonconvex NP problems. Computed results show that the two RDPNNs achieve faster computation than existing projection neural networks for solving NP problems. (A generic sketch of projection-network dynamics is given below the record.)
ISSN | 2162-237X (print); 2162-2388 (electronic)
DOI | 10.1109/TNNLS.2019.2927639
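
This record does not include the RDPNN dynamics themselves, so the following is only a minimal sketch of the classical, full-dimensional projection neural network on which such models are based, applied to a small box-constrained convex quadratic program. The dynamics dx/dt = -x + P(x - alpha * grad f(x)) are the standard projection-network form; the function names, step sizes, and the example QP are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box {x : lo <= x <= hi}."""
    return np.clip(x, lo, hi)

def pnn_solve(Q, c, lo, hi, alpha=0.1, dt=0.05, steps=20000, tol=1e-9):
    """Forward-Euler integration of the projection dynamics
    dx/dt = -x + P(x - alpha * grad f(x)) for f(x) = 0.5 x'Qx + c'x.
    An equilibrium satisfies x = P(x - alpha*(Qx + c)), which is the
    KKT/optimality condition of the box-constrained QP."""
    x = np.zeros_like(c, dtype=float)
    for _ in range(steps):
        grad = Q @ x + c
        x_next = x + dt * (-x + project_box(x - alpha * grad, lo, hi))
        if np.linalg.norm(x_next - x) <= tol:
            return x_next
        x = x_next
    return x

# Illustrative QP: minimize 0.5*||x||^2 - x1 - 2*x2 over the box [0, 1]^2.
# The unconstrained minimizer is (1, 2); the constrained solution is (1, 1).
Q = np.eye(2)
c = np.array([-1.0, -2.0])
x_star = pnn_solve(Q, c, lo=np.zeros(2), hi=np.ones(2))
print(x_star)  # approximately [1.0, 1.0]
```

For convex f, the equilibrium of these dynamics coincides with the constrained optimum. The paper's contribution is to reduce the state-space dimension and model complexity relative to models of this kind; that reduction is not reproduced in this generic sketch.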