A stochastic moving ball approximation method for smooth convex constrained minimization

Abstract: In this paper, we consider constrained optimization problems with a convex objective and smooth convex functional constraints. We propose a new stochastic gradient algorithm, called the Stochastic Moving Ball Approximation (SMBA) method, to solve this class of problems: at each iteration we first take a (sub)gradient step for the objective function and then perform a projection step onto a ball approximation of a randomly chosen constraint. The computational simplicity of SMBA, which uses first-order information and considers only one constraint at a time, makes it suitable for large-scale problems with many functional constraints. We provide a convergence analysis for the SMBA algorithm under basic assumptions on the problem, which yields new convergence rates in both optimality and feasibility criteria, evaluated at an average point. Our convergence proofs are novel, since we must deal properly with infeasible iterates and with quadratic upper approximations of constraints that may yield empty balls. We derive convergence rates of order $\mathcal{O}(k^{-1/2})$ when the objective function is convex, and $\mathcal{O}(k^{-1})$ when the objective function is strongly convex. Preliminary numerical experiments on quadratically constrained quadratic problems demonstrate the viability and performance of our method compared to some existing state-of-the-art optimization methods and software.

Bibliographic Details
Published in: Computational Optimization and Applications
Main Authors: Singh, Nitesh Kumar; Necoara, Ion
Format: Journal Article
Language: English
Published: 07.10.2024
ISSN: 0926-6003; 1573-2894
DOI: 10.1007/s10589-024-00612-5
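
To make the iteration described in the abstract concrete, the following Python/NumPy sketch implements one SMBA-style step under our own assumptions: it is not the authors' code, and the uniform sampling of a single constraint, the stepsize choice, and the fall-back to the ball's center when the quadratic upper approximation yields an empty ball are all illustrative choices. Completing the square in the quadratic upper bound $g_j(v) + \langle \nabla g_j(v), y - v \rangle + \frac{L_j}{2}\|y - v\|^2 \le 0$ (with $L_j$ a Lipschitz constant of $\nabla g_j$) gives a ball with center $c = v - \nabla g_j(v)/L_j$ and squared radius $\|\nabla g_j(v)\|^2/L_j^2 - 2 g_j(v)/L_j$, which is empty when that quantity is negative.

    import numpy as np

    def smba_step(x, f_subgrad, g, g_grad, L, alpha, rng):
        """One SMBA-style iteration (illustrative sketch, not the authors' code)."""
        # (Sub)gradient step for the objective.
        v = x - alpha * f_subgrad(x)
        # Sample one functional constraint uniformly at random.
        j = rng.integers(len(g))
        gj, dj, Lj = g[j](v), g_grad[j](v), L[j]
        # Ball approximation of {y : g_j(y) <= 0} built at v from the quadratic
        # upper bound: center c, squared radius r2 (r2 < 0 means an empty ball).
        c = v - dj / Lj
        r2 = dj @ dj / Lj**2 - 2.0 * gj / Lj
        if r2 < 0.0:
            return c  # empty ball: fall back to its center (an assumption here)
        dist = np.linalg.norm(v - c)
        if dist <= np.sqrt(r2):
            return v  # v already lies inside the ball
        return c + np.sqrt(r2) / dist * (v - c)  # Euclidean projection onto the ball

    # Toy usage: minimize ||x - b||^2 subject to ||x||^2 - 1 <= 0.
    rng = np.random.default_rng(0)
    b = np.array([2.0, 0.0])
    x = np.zeros(2)
    for k in range(1, 1001):
        x = smba_step(x,
                      f_subgrad=lambda z: 2.0 * (z - b),
                      g=[lambda z: z @ z - 1.0],
                      g_grad=[lambda z: 2.0 * z],
                      L=[2.0],                 # Lipschitz constant of the gradient 2z
                      alpha=1.0 / np.sqrt(k),  # a simple diminishing stepsize
                      rng=rng)
    print(x)  # should approach the feasible minimizer [1, 0]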