Quasi-Newton methods for minimizing a quadratic function subject to uncertainty
Format: Journal Article
Language: English
Published: 31.08.2021
Summary: We investigate quasi-Newton methods for minimizing a strictly convex quadratic function which is subject to errors in the evaluation of the gradients. In exact arithmetic the methods all exhibit identical behavior, generating minimizers of Krylov subspaces of increasing dimension and therefore terminating finitely. A BFGS quasi-Newton method is empirically known to behave very well on a quadratic problem subject to small errors. We also investigate large-error scenarios, in which the expected behavior is less clear. In particular, we are interested in the behavior of quasi-Newton matrices that differ from the identity by a low-rank matrix, such as in a memoryless BFGS method. Our numerical results indicate that for large errors, a memoryless quasi-Newton method often outperforms a BFGS method. We also consider a more advanced model for generating search directions, based on solving a chance-constrained optimization problem. Our results indicate that such a model often gives a slight advantage in final accuracy, although at a significantly higher computational cost.
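For reference, a minimal formalization of the setting above, written in standard notation of our own choosing rather than the paper's: the strictly convex quadratic, its noisy gradient, and the conventional memoryless BFGS update (Shanno's form), in which the inverse-Hessian approximation differs from the identity by a matrix of rank at most two.

```latex
% Strictly convex quadratic with noisy gradient observations
\min_{x \in \mathbb{R}^n} \; f(x) = \tfrac{1}{2}\, x^{\mathsf T} A x - b^{\mathsf T} x,
\qquad A \succ 0, \qquad
g(x) = A x - b + e, \quad \text{where $e$ is the gradient evaluation error.}

% Memoryless BFGS: apply one BFGS update to the identity, using only the
% most recent pair $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$,
% with $\rho_k = 1 / (y_k^{\mathsf T} s_k)$:
H_{k+1} = \bigl(I - \rho_k\, s_k y_k^{\mathsf T}\bigr)
          \bigl(I - \rho_k\, y_k s_k^{\mathsf T}\bigr)
          + \rho_k\, s_k s_k^{\mathsf T},
\qquad p_{k+1} = -H_{k+1}\, g_{k+1}.
```

Here $H_{k+1} - I$ has rank at most two, which is the "identity plus a low-rank matrix" structure the summary refers to; full BFGS instead carries the previous approximation $H_k$ through the update in place of $I$.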
DOI: 10.48550/arxiv.2109.00072
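The comparison the summary describes can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the dimension, the Gaussian noise model, the exact-linesearch step rule, and the curvature safeguard are all our own assumptions.

```python
# Sketch: full BFGS vs. memoryless BFGS on a strictly convex quadratic
# f(x) = 0.5 x^T A x - b^T x whose gradient A x - b is observed with
# additive Gaussian noise. All problem parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)          # well-conditioned SPD Hessian
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)       # exact minimizer, for error tracking

def noisy_grad(x, sigma):
    """True gradient A x - b plus i.i.d. Gaussian evaluation error."""
    return A @ x - b + sigma * rng.standard_normal(n)

def run(method, sigma, iters=100):
    x = np.zeros(n)
    H = np.eye(n)                    # inverse-Hessian approximation
    g = noisy_grad(x, sigma)
    for _ in range(iters):
        p = -H @ g                   # quasi-Newton search direction
        # Exact linesearch for the quadratic along p, using the noisy
        # gradient in place of the unavailable true one.
        alpha = -(g @ p) / (p @ A @ p)
        s = alpha * p
        x_new = x + s
        g_new = noisy_grad(x_new, sigma)
        y = g_new - g
        if s @ y > 1e-12:            # curvature safeguard: keep H positive definite
            rho = 1.0 / (s @ y)
            V = np.eye(n) - rho * np.outer(s, y)
            if method == "bfgs":     # propagate the previous approximation
                H = V @ H @ V.T + rho * np.outer(s, s)
            else:                    # memoryless: restart from the identity
                H = V @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return np.linalg.norm(x - x_star)

for sigma in (1e-6, 1e-1):           # small-error and large-error regimes
    print(f"sigma={sigma:.0e}  BFGS err={run('bfgs', sigma):.2e}  "
          f"memoryless err={run('memoryless', sigma):.2e}")
```

The two variants differ only in the single line that rebuilds H: full BFGS propagates the accumulated approximation through the update, while the memoryless variant restarts from the identity each iteration, so H is always the identity plus a rank-two correction. Which variant reaches better final accuracy depends on the noise level; the paper's finding is that the memoryless update often wins in the large-error regime.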