Learning to Solve Optimization Problems With Hard Linear Constraints

Bibliographic Details
Published in: IEEE Access, Vol. 11, pp. 59995–60004
Main Authors: Li, Meiyi; Kolouri, Soheil; Mohammadi, Javad
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3285199

More Information
Summary: Constrained optimization problems appear in a wide variety of challenging real-world applications, where constraints often capture the physics of the underlying system. Classic methods for solving these problems rely on iterative algorithms that explore the feasible domain in search of the best solution. These iterative methods often become the computational bottleneck in decision-making and adversely impact time-sensitive applications. Recently, neural approximators have shown promise as replacements for iterative solvers: they can output a near-optimal solution in a single feed-forward pass, providing rapid solutions to optimization problems. However, enforcing constraints through neural networks remains an open challenge. In this paper, we develop a neural approximator that maps the inputs of an optimization problem with hard linear constraints to a feasible solution that is nearly optimal. Our proposed approach consists of five main steps: 1) reducing the original problem to an optimization over a set of independent variables, 2) finding a gauge function that maps the $\ell_{\infty}$-norm unit ball to the feasible set of the reduced problem, 3) learning a neural approximator that maps the optimization's inputs to a virtual optimal point in the $\ell_{\infty}$-norm unit ball, 4) applying the gauge mapping to project the virtual optimal point from the $\ell_{\infty}$-norm unit ball onto the feasible space, and 5) recovering the values of the dependent variables from the independent variables to obtain the solution to the original problem. This sequence of steps guarantees hard feasibility. Unlike current learning-assisted solutions, our method is free of parameter tuning (in contrast to penalty-based methods) and removes iterations altogether. We demonstrate the performance of the proposed method on quadratic programming in the context of optimal power dispatch (critical to the resiliency of the electric grid) and on constrained non-convex optimization in the context of image registration. Our results support the theoretical findings and show superior performance in terms of computational time, optimality, and feasibility of the solution compared to existing approaches.
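To make steps 1, 2, 4, and 5 of the summary concrete, the sketch below builds a toy pipeline in NumPy: equality constraints are eliminated via a null-space parameterization, and a Minkowski-gauge map sends any point of the $\ell_{\infty}$-norm unit ball onto the reduced feasible polytope. The problem data, the interior point z0, and the helper names (null_space_reduction, gauge_map) are illustrative assumptions, not the authors' implementation; the tanh output merely stands in for the learned network of step 3.

```python
# Minimal sketch of the gauge-mapping idea described in the summary (assumed names/data).
import numpy as np

def null_space_reduction(A_eq, b_eq):
    """Step 1 (sketch): write x = x_p + N @ z, with z the independent variables.

    x_p is a particular solution of A_eq x = b_eq, and the columns of N span
    the null space of A_eq, so every z yields an x satisfying the equalities.
    """
    x_p, *_ = np.linalg.lstsq(A_eq, b_eq, rcond=None)
    _, s, Vt = np.linalg.svd(A_eq)          # null-space basis from the SVD
    rank = int(np.sum(s > 1e-10))
    N = Vt[rank:].T
    return x_p, N

def gauge_map(z, G, h, z0):
    """Steps 2 and 4 (sketch): map z from the l-infinity unit ball onto the
    polytope {v : G v <= h}, which must contain the interior point z0.

    For the shifted set C = {v : G v <= h - G z0}, the Minkowski gauge is
    phi_C(v) = max_i (G v)_i / (h - G z0)_i; the gauge map returns
    z0 + (||z||_inf / phi_C(z)) * z, which is feasible whenever ||z||_inf <= 1.
    """
    slack = h - G @ z0                       # strictly positive for an interior z0
    phi_C = max(np.max((G @ z) / slack), 1e-12)
    phi_ball = np.max(np.abs(z))             # gauge of the l-infinity unit ball
    return z0 + (phi_ball / phi_C) * z
```

A short usage sketch on a toy problem (sum of the variables fixed to one, each variable in [-1, 1]); the tanh sample is a stand-in for the network's virtual optimal point:

```python
rng = np.random.default_rng(0)
A_eq, b_eq = np.array([[1.0, 1.0, 1.0]]), np.array([1.0])
A_in, b_in = np.vstack([np.eye(3), -np.eye(3)]), np.ones(6)

x_p, N = null_space_reduction(A_eq, b_eq)
G, h = A_in @ N, b_in - A_in @ x_p          # inequalities in the reduced z-space
z0 = np.zeros(N.shape[1])                   # assume 0 is interior (true here)
z_virtual = np.tanh(rng.normal(size=N.shape[1]))  # stand-in for step 3's output
z_feasible = gauge_map(z_virtual, G, h, z0)       # step 4: project onto the polytope
x = x_p + N @ z_feasible                    # step 5: recover the full solution
assert np.allclose(A_eq @ x, b_eq) and np.all(A_in @ x <= b_in + 1e-9)
```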