Self-adaptive algorithms for quasiconvex programming and applications to machine learning

Bibliographic Details
Published in: Computational & Applied Mathematics, Vol. 43, No. 4
Main Authors: Thang, Tran Ngoc; Hai, Trinh Ngoc
Format: Journal Article
Language: English
Published: Cham: Springer International Publishing, 01.06.2024 (Springer Nature B.V.)
More Information
Summary: For solving a broad class of nonconvex programming problems on an unbounded constraint set, we provide a self-adaptive step-size strategy that does not involve line-search techniques, and we establish the convergence of a generic approach under mild assumptions. Specifically, the objective function need not satisfy the convexity condition. Unlike descent line-search algorithms, the method does not require a known Lipschitz constant to determine the initial step size. The crucial feature of this process is the steady reduction of the step size until a certain condition is fulfilled. In particular, it yields a new gradient projection approach for optimization problems with an unbounded constraint set. To demonstrate the effectiveness of the proposed technique for large-scale problems, we apply it to several machine learning experiments, including supervised feature selection, multivariable logistic regression, and neural networks for classification.
ISSN: 2238-3603, 1807-0302
DOI: 10.1007/s40314-024-02764-w
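
The summary above describes a gradient projection method whose step size is repeatedly shrunk within each iteration until an acceptance condition holds, so no Lipschitz constant of the gradient is needed in advance. The record does not spell out the paper's exact update rule or acceptance test, so the following is only a minimal Python sketch of a generic self-adaptive gradient projection scheme of that kind; the acceptance condition, the name adaptive_gradient_projection, and the arguments grad and project are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def adaptive_gradient_projection(grad, project, x0, lam0=1.0, mu=0.5,
                                 shrink=0.5, tol=1e-6, max_iter=1000):
    """Sketch of a self-adaptive gradient projection method (no line search).

    grad:    callable returning the gradient of the objective at a point
    project: callable returning the projection onto the constraint set
    The acceptance test below compares the gradient variation against the
    trial displacement; it stands in for the condition used in the paper,
    which this record does not state.
    """
    x, lam = np.asarray(x0, dtype=float), lam0
    for _ in range(max_iter):
        g = grad(x)
        # Shrink the trial step size until the acceptance condition holds;
        # termination of this inner loop relies on grad being locally
        # Lipschitz near the iterates, but the constant is never needed.
        while True:
            y = project(x - lam * g)
            if lam * np.linalg.norm(grad(y) - g) <= mu * np.linalg.norm(y - x):
                break
            lam *= shrink
        if np.linalg.norm(y - x) <= tol:
            return y
        x = y
    return x


# Example: minimize f(x) = 0.5 * ||x - b||^2 over the nonnegative orthant.
b = np.array([1.0, -2.0, 3.0])
x_star = adaptive_gradient_projection(
    grad=lambda x: x - b,                  # gradient of the quadratic
    project=lambda x: np.maximum(x, 0.0),  # projection onto {x >= 0}
    x0=np.zeros(3),
)
print(x_star)  # approximately [1, 0, 3]
```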