Constrained optimization in simulation: efficient global optimization and Karush-Kuhn-Tucker conditions
Published in | Journal of Global Optimization, Vol. 91, No. 4, pp. 897-922
Main Authors | , , , |
Format | Journal Article |
Language | English |
Published | New York: Springer US, 01.04.2025 (Springer Nature B.V.)
Subjects | |
Summary | We develop a novel methodology for solving constrained optimization problems in deterministic simulation. In these problems, the goal (or objective) output is to be minimized, subject to one or more constraints on the other outputs and on the inputs. Our methodology combines the “Karush-Kuhn-Tucker” (KKT) conditions with “efficient global optimization” (EGO). These KKT conditions are well-known first-order necessary optimality conditions in white-box mathematical optimization, but our method is the first EGO method that uses these conditions. EGO is a popular type of algorithm that is closely related to “Bayesian optimization” and “active machine learning”, as they all use Gaussian processes or Kriging to approximate the input/output behavior of black-box models. We numerically compare the performance of our KKT-EGO algorithm and two alternative EGO algorithms in several popular examples. In some examples our algorithm converges faster to the true optimum, so our algorithm may provide a suitable alternative.
ISSN | 0925-5001; 1573-2916
DOI | 10.1007/s10898-024-01448-3
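
For readers unfamiliar with EGO, the sketch below shows the generic loop the summary refers to: a Kriging (Gaussian process) surrogate of the black-box objective is refit after every evaluation, and the next input is chosen by maximizing expected improvement. This is a minimal, unconstrained illustration only; it is not the paper's KKT-EGO algorithm, and the objective function, kernel, and candidate-sampling scheme are illustrative assumptions (NumPy, SciPy, and scikit-learn are assumed available).

```python
# Minimal, generic EGO sketch (unconstrained, expected-improvement criterion).
# NOT the paper's KKT-EGO algorithm; objective and settings are illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X_cand, gp, f_best):
    # EI(x) = E[max(f_best - Y(x), 0)] under the Kriging predictor at x
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)          # guard against zero predictive std
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def f(x):
    # Hypothetical black-box objective on [0, 1] (stand-in for a simulation output)
    return np.sin(3.0 * x) + x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5, 1))        # small initial design
y = f(X).ravel()

for _ in range(20):                            # sequential EGO iterations
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True).fit(X, y)
    X_cand = rng.uniform(0.0, 1.0, size=(1000, 1))          # candidate inputs
    x_next = X_cand[np.argmax(expected_improvement(X_cand, gp, y.min()))]
    X = np.vstack([X, x_next])                 # evaluate the simulation at x_next
    y = np.append(y, f(x_next))

print("estimated optimum:", X[np.argmin(y)], y.min())
```

In the constrained setting of the paper, additional Kriging metamodels for the constrained outputs and the KKT conditions would also enter the choice of the next input; the sketch above shows only the unconstrained expected-improvement mechanics.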