Improving the performance of weighted Lagrange-multiplier methods for nonlinear constrained optimization

Bibliographic Details
Published in: Information Sciences, Vol. 124, No. 1, pp. 241-272
Main Authors: Wah, Benjamin W.; Wang, Tao; Shang, Yi; Wu, Zhe
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.05.2000

Summary: Nonlinear constrained optimization problems in discrete and continuous spaces are an important class of problems studied extensively in artificial intelligence and operations research. These problems can be solved by a Lagrange-multiplier method in continuous space and by an extended discrete Lagrange-multiplier method in discrete space. When constraints are satisfied, these methods rely on gradient descents in the objective space to find high-quality solutions. When constraints are violated, they rely on gradient ascents in the Lagrange-multiplier space to increase the penalties on unsatisfied constraints and force the constraints into satisfaction. The balance between gradient descents and gradient ascents depends on the relative weights between the objective function and the constraints, which indirectly control the convergence speed and solution quality of the Lagrangian method. To improve convergence speed without degrading solution quality, we propose an algorithm that dynamically controls the relative weights between the objective and the constraints. Starting from an initial weight, the algorithm automatically adjusts the weights based on the behavior of the search progress. With this strategy, we are able to eliminate divergence, reduce oscillation, and speed up convergence. We show improved convergence behavior of the proposed algorithm on both nonlinear continuous and discrete problems.
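
The abstract describes the overall structure of the method (gradient descent in the variables, gradient ascent in the multipliers, and a relative weight on the objective that is adapted from observed search progress) but not the exact weight-control rule. The following Python sketch only illustrates that structure for a continuous equality-constrained problem; the function name, the window-based adjustment rule, and all parameter values are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def weighted_lagrangian_search(grad_f, g, jac_g, x0, w0=1.0,
                               alpha=0.01, beta=0.1,
                               iters=5000, window=100):
    """Sketch of a first-order saddle-point search on the weighted
    Lagrangian L(x, lam) = w*f(x) + lam . g(x) for equality constraints
    g(x) = 0: descent in x, ascent in lam, and a heuristic adjustment
    of the objective weight w (a stand-in for the paper's rule)."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(np.atleast_1d(g(x))))
    w = w0
    prev_avg, viol_sum = np.inf, 0.0
    for k in range(1, iters + 1):
        gx = np.atleast_1d(g(x))
        viol_sum += np.linalg.norm(gx, 1)
        # Descent step in the original variables: weighted objective
        # gradient plus the constraint term of the Lagrangian.
        x = x - alpha * (w * grad_f(x) + jac_g(x).T @ lam)
        # Ascent step in the multipliers: raise the penalty on each
        # constraint in proportion to its violation.
        lam = lam + beta * gx
        # Hypothetical weight-control rule: every `window` steps, if the
        # average violation has not improved, de-emphasize the objective
        # so the constraint forces dominate the search.
        if k % window == 0:
            avg = viol_sum / window
            if avg >= prev_avg:
                w = max(0.05 * w0, 0.5 * w)
            prev_avg, viol_sum = avg, 0.0
    return x, lam, w

# Toy usage: minimize x0^2 + x1^2 subject to x0 + x1 = 1.
grad_f = lambda x: 2.0 * x
g = lambda x: np.array([x[0] + x[1] - 1.0])
jac_g = lambda x: np.array([[1.0, 1.0]])
x_star, lam_star, w_final = weighted_lagrangian_search(
    grad_f, g, jac_g, x0=[2.0, -1.0])
print(x_star)  # close to [0.5, 0.5]
```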
ISSN: 0020-0255
eISSN: 1872-6291
DOI: 10.1016/S0020-0255(99)00081-X