Swarm-Based Gradient Descent Method for Non-Convex Optimization
We introduce a new Swarm-Based Gradient Descent (SBGD) method for non-convex optimization. The swarm consists of agents, each identified by a position, \({\mathbf x}\), and a mass, \(m\). The key to their dynamics is communication: mass is transferred from agents at high ground to agents at low(-est) ground. At the same time, agents change positions with a step size, \(h=h({\mathbf x},m)\), adjusted to their relative mass: heavier agents proceed with small time steps in the direction of the local gradient, while lighter agents take larger time steps based on a backtracking protocol. Accordingly, the crowd of agents is dynamically divided between 'heavier' leaders, expected to approach local minima, and 'lighter' explorers. With their large-step protocol, explorers are expected to encounter improved positions for the swarm; when they do, they assume the role of 'heavy' swarm leaders, and so on. Convergence analysis and numerical simulations on one-, two-, and 20-dimensional benchmarks demonstrate the effectiveness of SBGD as a global optimizer.
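To make the abstract's description concrete, here is a minimal Python sketch of the swarm dynamics it outlines: agents carrying a position and a mass, mass flowing toward the lowest-ground agent, and mass-dependent step sizes with backtracking. The specific transfer rate, the step-size rule \(h \propto 1 - m/m_{\max}\), and the halving backtracking schedule are illustrative assumptions, not the authors' exact protocol; `sbgd_sketch` and all its parameters are hypothetical names.

```python
import numpy as np

def sbgd_sketch(f, grad_f, x0, n_agents=20, n_iter=200,
                transfer_rate=0.05, h_max=1.0, shrink=0.5, seed=0):
    """Illustrative SBGD-style swarm optimizer (sketch, not the paper's scheme).

    Each agent has a position x and a mass m. Per iteration:
    (1) non-leading agents shed a fraction of mass to the lowest agent,
    (2) each agent takes a gradient step whose size shrinks with its
        relative mass, refined by a simple backtracking line search.
    """
    rng = np.random.default_rng(seed)
    d = len(x0)
    X = x0 + rng.normal(size=(n_agents, d))   # agent positions
    m = np.full(n_agents, 1.0 / n_agents)     # agent masses, summing to 1

    for _ in range(n_iter):
        fvals = np.array([f(x) for x in X])
        leader = np.argmin(fvals)             # agent at lowest ground

        # Mass transfer (assumed rule): every non-leader sheds a fixed
        # fraction of its mass to the current lowest-ground agent.
        shed = transfer_rate * m
        shed[leader] = 0.0
        m -= shed
        m[leader] += shed.sum()

        # Position update: heavy leaders take small, careful steps;
        # light explorers take large steps, tamed by backtracking.
        rel = m / m.max()
        for i in range(n_agents):
            g = grad_f(X[i])
            if np.linalg.norm(g) == 0.0:
                continue
            h = h_max * (1.0 - rel[i]) + 1e-3  # lighter -> larger step
            # Backtracking: halve h until the step actually improves f.
            while h > 1e-8 and f(X[i] - h * g) >= fvals[i]:
                h *= shrink
            X[i] = X[i] - h * g

    best = np.argmin([f(x) for x in X])
    return X[best], f(X[best])

# Usage on a classic non-convex benchmark (2-D Rastrigin):
f = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
grad = lambda x: 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)
x_best, f_best = sbgd_sketch(f, grad, x0=np.array([3.0, -2.5]))
```

The division of labor falls out of the mass dynamics alone: as the leader accumulates mass its step size shrinks toward a local gradient descent, while the agents it drains keep large exploratory steps until one of them lands on lower ground and becomes the new leader.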
Published in | arXiv.org
---|---
Main Authors | , ,
Format | Paper
Language | English
Published | Ithaca: Cornell University Library, arXiv.org, 30.04.2024
Subjects |
ISSN | 2331-8422