A global optimization method, αBB, for general twice-differentiable constrained NLPs — I. Theoretical advances

Bibliographic Details
Published in: Computers & Chemical Engineering, Vol. 22, No. 9, pp. 1137-1158
Main Authors: Adjiman, C.S., Dallwig, S., Floudas, C.A., Neumaier, A.
Format: Journal Article; Conference Proceeding
Language: English
Published: Oxford: Elsevier Ltd, 01.01.1998
Summary: In this paper, the deterministic global optimization algorithm αBB (α-based Branch and Bound) is presented. This algorithm offers mathematical guarantees for convergence to a point arbitrarily close to the global minimum for the large class of twice-differentiable NLPs. The key idea is the construction of a converging sequence of upper and lower bounds on the global minimum through the convex relaxation of the original problem. This relaxation is obtained by (i) replacing all nonconvex terms of special structure (i.e., bilinear, trilinear, fractional, fractional trilinear, univariate concave) with customized tight convex lower bounding functions and (ii) by utilizing some α parameters, as defined by Maranas and Floudas (1994b), to generate valid convex underestimators for nonconvex terms of generic structure. In most cases, the calculation of appropriate values for the α parameters is a challenging task. A number of approaches are proposed which rigorously generate a set of α parameters for general twice-differentiable functions. A crucial phase in the design of such procedures is the use of interval arithmetic on the Hessian matrix or the characteristic polynomial of the function being investigated. Thanks to this step, the proposed schemes share the common property of computational tractability and preserve the global optimality guarantees of the algorithm. However, their accuracy and computational requirements differ, so that no method can be shown to perform consistently better than the others for all problems. Their use is illustrated on an unconstrained and a constrained example. The second part of this paper (Adjiman et al., 1998) is devoted to the discussion of issues related to the implementation of the αBB algorithm and to extensive computational studies illustrating its potential applications.
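To make the abstract's idea concrete, the sketch below illustrates an α-type convex underestimator L(x) = f(x) + α Σᵢ (xᵢᴸ - xᵢ)(xᵢᵁ - xᵢ), with α obtained from a Gershgorin-style bound on an interval Hessian. This is a minimal illustration under simplifying assumptions, not the paper's implementation: the interval Hessian is supplied by hand, and the Gershgorin bound is only one of the several α-computation schemes the paper proposes.

```python
import numpy as np

def alpha_gershgorin(hessian_interval_rows):
    """Compute a valid alpha from an interval Hessian [H] given as a list of
    rows, each row a list of (lo, hi) entry bounds.

    A Gershgorin-type bound on the smallest eigenvalue over the interval:
        lambda_min >= min_i ( h_ii^L - sum_{j != i} max(|h_ij^L|, |h_ij^U|) )
    and then alpha = max(0, -0.5 * lambda_min_bound).
    """
    bound = min(
        row[i][0] - sum(max(abs(lo), abs(hi))
                        for j, (lo, hi) in enumerate(row) if j != i)
        for i, row in enumerate(hessian_interval_rows)
    )
    return max(0.0, -0.5 * bound)

def underestimator(f, alpha, xL, xU):
    """Build L(x) = f(x) + alpha * sum((xL - x) * (xU - x)).

    The quadratic term is non-positive on [xL, xU] (so L <= f there) and adds
    2*alpha to every Hessian eigenvalue, convexifying f when alpha is large
    enough.
    """
    def L(x):
        x = np.asarray(x, dtype=float)
        return f(x) + alpha * np.sum((xL - x) * (xU - x))
    return L

# Univariate demo: f(x) = sin(x) on [-pi, pi]; f''(x) = -sin(x) is contained
# in the interval [-1, 1], so the 1x1 interval Hessian is [(-1, 1)].
a = alpha_gershgorin([[(-1.0, 1.0)]])   # -> 0.5
L = underestimator(np.sin, a, -np.pi, np.pi)

# L is convex (L'' = -sin(x) + 2*0.5 >= 0) and underestimates sin on the box.
xs = np.linspace(-np.pi, np.pi, 201)
assert all(L(x) <= np.sin(x) + 1e-9 for x in xs)
```

In αBB proper, this lower bounding function is minimized on each subdomain of the branch-and-bound tree, and the interval Hessian bounds (and hence α) tighten as the boxes shrink, which drives the lower bounds toward the global minimum.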
ISSN: 0098-1354, 1873-4375
DOI: 10.1016/S0098-1354(98)00027-1