Randomized Gradient-Free Method for Multiagent Optimization Over Time-Varying Networks

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 26, No. 6, pp. 1342-1347
Main Authors: Yuan, Deming; Ho, Daniel W. C.
Format: Journal Article
Language: English
Published: IEEE, United States, 01.06.2015

Summary: In this brief, we consider multiagent optimization over a network in which multiple agents seek to minimize a sum of nonsmooth but Lipschitz continuous functions, subject to a convex state constraint set. The underlying network topology is modeled as time-varying. We propose a randomized derivative-free method in which, at each update, random gradient-free oracles are used in place of subgradients (SGs). In contrast to existing work, we do not require that agents be able to compute the SGs of their objective functions. We establish convergence of the method to an approximate solution of the multiagent optimization problem, within an error level that depends on the smoothing parameter and the Lipschitz constant of each agent's objective function. Finally, a numerical example is provided to demonstrate the effectiveness of the method.
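
The brief itself is not reproduced here, but the mechanism the summary describes, replacing each agent's subgradient with a random gradient-free oracle inside a projected consensus update, can be sketched. Below is a minimal Python illustration assuming the standard Nesterov-style two-point Gaussian oracle; the box constraint, the static doubly stochastic weight matrix W, the smoothing parameter mu, and the step-size schedule are illustrative stand-ins, not the paper's exact scheme.

```python
import numpy as np

def gradient_free_oracle(f, x, mu, rng):
    # Two-point random oracle: a finite difference along a random Gaussian
    # direction u, an unbiased estimator of the gradient of the smoothed
    # surrogate f_mu(x) = E[f(x + mu * u)].
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def project_box(x, lo, hi):
    # Euclidean projection onto a box, standing in for the convex
    # constraint set X (a hypothetical choice for this demo).
    return np.clip(x, lo, hi)

def decentralized_gf_step(xs, fs, W, mu, alpha, rng, lo=-1.0, hi=1.0):
    # One iteration for all agents: consensus averaging with the weight
    # matrix W (time-varying in the paper; static here for brevity),
    # then a projected gradient-free descent step per agent.
    mixed = W @ xs
    new = np.empty_like(xs)
    for i, f in enumerate(fs):
        g = gradient_free_oracle(f, mixed[i], mu, rng)
        new[i] = project_box(mixed[i] - alpha * g, lo, hi)
    return new

# Toy usage: three agents minimizing the nonsmooth sum of |x - c_i| terms.
rng = np.random.default_rng(0)
fs = [lambda x, c=c: np.abs(x - c).sum() for c in (-0.5, 0.0, 0.5)]
xs = rng.standard_normal((3, 1))
W = np.full((3, 3), 1.0 / 3.0)  # doubly stochastic mixing weights
for k in range(1, 2001):
    xs = decentralized_gf_step(xs, fs, W, mu=1e-3, alpha=0.5 / np.sqrt(k), rng=rng)
print(xs.ravel())  # all agents settle near the common minimizer c = 0
```

Consistent with the summary, the smoothing parameter mu controls the oracle's bias, so the iterates approach the optimum only up to an error level governed by mu and the Lipschitz constants of the agents' objectives.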
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2014.2336806