A Method for Transforming Non-Convex Optimization Problem to Distributed Form


Bibliographic Details
Published in: Mathematics (Basel), Vol. 12, No. 17, p. 2796
Main Authors: Khamisov, Oleg O.; Khamisov, Oleg V.; Ganchev, Todor D.; Semenkin, Eugene S.
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.09.2024

Summary: We propose a novel distributed method for non-convex optimization problems with coupling equality and inequality constraints. This method transforms the optimization problem into a specific form to allow distributed implementation of modified gradient descent and Newton's methods so that they operate as if they were distributed. We demonstrate that for the proposed distributed method: (i) communications are significantly less time-consuming than oracle calls, (ii) its convergence rate is equivalent to the convergence of Newton's method concerning oracle calls, and (iii) for the cases when oracle calls are more expensive than communication between agents, the transition from a centralized to a distributed paradigm does not significantly affect computational time. The proposed method is applicable when the objective function is twice differentiable and constraints are differentiable, which holds for a wide range of machine learning methods and optimization setups.
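The abstract does not spell out the transformation itself, but the general idea of splitting a coupled problem so that each agent uses only its own (expensive) oracle while exchanging a few scalars can be illustrated with a classical dual-decomposition sketch. The functions f1, f2, the coupling constraint x1 + x2 = 1, and the step sizes below are illustrative assumptions, not the paper's actual method:

```python
# Hedged sketch: a generic dual-decomposition scheme, NOT the paper's
# transformation. It shows how a coupling equality constraint lets two
# agents run gradient descent with only local oracle calls plus a
# cheap scalar exchange (the multiplier and constraint residual).
#
# Problem (illustrative): minimize f1(x1) + f2(x2)  s.t.  x1 + x2 = 1,
# with f1(x) = (x - 2)^2 and f2(x) = (x + 1)^2.

def grad_f1(x):
    # Local oracle of agent 1: gradient of (x - 2)^2.
    return 2.0 * (x - 2.0)

def grad_f2(x):
    # Local oracle of agent 2: gradient of (x + 1)^2.
    return 2.0 * (x + 1.0)

def dual_decomposition(steps=2000, alpha=0.05, beta=0.05):
    x1, x2, lam = 0.0, 0.0, 0.0
    for _ in range(steps):
        # Each agent descends its own Lagrangian term using only its
        # private oracle and the shared multiplier lam.
        x1 -= alpha * (grad_f1(x1) + lam)
        x2 -= alpha * (grad_f2(x2) + lam)
        # Multiplier update from the constraint residual: the only
        # communication needed per iteration is a couple of scalars.
        lam += beta * (x1 + x2 - 1.0)
    return x1, x2, lam

x1, x2, lam = dual_decomposition()
# For this instance the constrained optimum is x1 = 2, x2 = -1.
```

This reflects the cost model in the abstract: per iteration, each agent pays one oracle call (a local gradient), while the coordination step is a lightweight scalar exchange, so communication is cheap relative to oracle calls.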
ISSN: 2227-7390
DOI: 10.3390/math12172796