On the relations of stochastic convex optimization problems with empirical risk minimization problems on \(p\)-norm balls

Published in: arXiv.org
Main Authors: Dvinskikh, Darina; Pirau, Vitali; Gasnikov, Alexander
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 02.03.2022
Summary: In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e.g., risk minimization) and mathematical statistics (e.g., maximum likelihood estimation). There are two main approaches to solving such problems: the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to choose the sample size, i.e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is also a solution of the original problem with the desired precision. This is one of the central questions in modern machine learning and optimization. In the last decade, significant advances were made in these areas for convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in the \(\ell_p\)-norms. We also explore how the parameter \(p\) affects the estimates of the required number of samples as a function of the empirical risk.
ISSN:2331-8422
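The offline (Sample Average Approximation) approach described in the summary can be illustrated with a minimal sketch: the expected risk is replaced by an empirical average over \(n\) sampled realizations, and the empirical problem is minimized over an \(\ell_p\)-ball. This is not the paper's algorithm; it is a hypothetical illustration using projected gradient descent with \(p = 2\), where projection onto the ball is a simple rescaling (all function names and parameters are assumptions for the example).

```python
import numpy as np

def saa_least_squares_l2_ball(A, b, radius, steps=500, lr=0.01):
    """Sample Average Approximation sketch: minimize the empirical risk
    (1/n) * ||A x - b||^2 over the Euclidean ball {x : ||x||_2 <= radius}
    via projected gradient descent. Illustrative only; for p = 2 the
    projection is a simple rescaling toward the origin."""
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(steps):
        # Gradient of the empirical (sample-average) risk
        grad = (2.0 / n) * A.T @ (A @ x - b)
        x = x - lr * grad
        # Project back onto the l2-ball if the step left it
        norm = np.linalg.norm(x)
        if norm > radius:
            x *= radius / norm
    return x

# Hypothetical usage: samples (a_i, b_i) drawn from a linear model
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))                  # n = 200 realizations
x_true = np.ones(5)
b = A @ x_true + 0.1 * rng.normal(size=200)    # noisy observations
x_hat = saa_least_squares_l2_ball(A, b, radius=10.0)
```

For general \(p \neq 2\) the projection onto the \(\ell_p\)-ball is no longer a closed-form rescaling, which is one reason the geometry of the ball (the parameter \(p\)) enters the sample-size estimates studied in the paper.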