A global optimization method, αBB, for general twice-differentiable constrained NLPs — I. Theoretical advances
| Published in | *Computers & Chemical Engineering*, Vol. 22, No. 9, pp. 1137–1158 |
|---|---|
| Format | Journal Article; Conference Proceeding |
| Language | English |
| Published | Oxford: Elsevier Ltd, 01.01.1998 |
Summary: In this paper, the deterministic global optimization algorithm αBB (α-based Branch and Bound) is presented. This algorithm offers mathematical guarantees of convergence to a point arbitrarily close to the global minimum for the large class of twice-differentiable NLPs. The key idea is the construction of a converging sequence of upper and lower bounds on the global minimum through the convex relaxation of the original problem. This relaxation is obtained by (i) replacing all nonconvex terms of special structure (i.e., bilinear, trilinear, fractional, fractional trilinear, univariate concave) with customized tight convex lower bounding functions and (ii) utilizing the α parameters, as defined by Maranas and Floudas (1994b), to generate valid convex underestimators for nonconvex terms of generic structure.
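For orientation, the cited construction (Maranas and Floudas, 1994b) underestimates a general nonconvex term $f$ over the box $[x^L, x^U]$ along the following lines; the notation here is a generic sketch, not a verbatim excerpt from the paper:

$$
\mathcal{L}(x) = f(x) + \sum_{i=1}^{n} \alpha_i \left(x_i^{L} - x_i\right)\left(x_i^{U} - x_i\right),
\qquad
\alpha_i \ge \max\left\{0,\; -\tfrac{1}{2} \min_{x \in [x^{L}, x^{U}]} \lambda_{\min}\!\left(\nabla^2 f(x)\right)\right\}.
$$

Each quadratic term is nonpositive on the box, so $\mathcal{L}$ underestimates $f$; and since it adds $2\alpha_i$ to the $i$-th diagonal entry of the Hessian, sufficiently large $\alpha_i$ render $\mathcal{L}$ convex.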
In most cases, the calculation of appropriate values for the α parameters is a challenging task. A number of approaches are proposed which rigorously generate a set of α parameters for general twice-differentiable functions. A crucial phase in the design of such procedures is the use of interval arithmetic on the Hessian matrix, or on the characteristic polynomial, of the function being investigated. Thanks to this step, the proposed schemes are computationally tractable and preserve the global optimality guarantees of the algorithm. However, their accuracy and computational requirements differ, so that no single method performs consistently better than the others across all problems. Their use is illustrated on an unconstrained and a constrained example.
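As an illustration of the interval-arithmetic step, one tractable way to bound the minimum eigenvalue of an interval Hessian is Gerschgorin's circle theorem. The following is a minimal sketch, assuming the interval Hessian is supplied as elementwise lower/upper bound matrices; the function name and NumPy-based representation are illustrative choices, not the paper's implementation:

```python
import numpy as np

def alpha_gerschgorin(H_lo, H_up):
    """Uniform alpha from an interval Hessian [H_lo, H_up].

    Gerschgorin's theorem gives, for every symmetric H with
    H_lo <= H <= H_up elementwise,
        lambda_min(H) >= min_i ( H_lo[i,i] - sum_{j!=i} max(|H_lo[i,j]|, |H_up[i,j]|) ),
    and alpha = max(0, -lambda_min_bound / 2) then convexifies
    f(x) + alpha * sum_i (x_i^L - x_i)(x_i^U - x_i).
    """
    H_lo = np.asarray(H_lo, dtype=float)
    H_up = np.asarray(H_up, dtype=float)
    radii = np.maximum(np.abs(H_lo), np.abs(H_up))    # worst-case |H[i,j]|
    off_diag = radii.sum(axis=1) - np.diag(radii)     # row sums over j != i
    lam_min_bound = np.min(np.diag(H_lo) - off_diag)  # Gerschgorin lower bound
    return max(0.0, -0.5 * lam_min_bound)

# Bilinear term f(x, y) = x*y has constant Hessian [[0, 1], [1, 0]]
# (lambda_min = -1), so the bound is exact here and alpha = 0.5.
print(alpha_gerschgorin([[0, 1], [1, 0]], [[0, 1], [1, 0]]))  # -> 0.5
```

Any valid lower bound on the minimum eigenvalue yields valid α values; the schemes compared in the paper trade the tightness of such bounds against their computational cost.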
The second part of this paper (Adjiman et al., 1998) is devoted to the discussion of issues related to the implementation of the αBB algorithm and to extensive computational studies illustrating its potential applications.
| ISSN | 0098-1354; 1873-4375 |
|---|---|
| DOI | 10.1016/S0098-1354(98)00027-1 |