Coordination Complexity: Small Information Coordinating Large Populations

Bibliographic Details
Main Authors: Cummings, Rachel; Ligett, Katrina; Radhakrishnan, Jaikumar; Roth, Aaron; Wu, Zhiwei Steven
Format: Journal Article
Language: English
Published: 15.08.2015

More Information
Summary: We initiate the study of a quantity that we call coordination complexity. In a distributed optimization problem, the information defining a problem instance is distributed among $n$ parties, each of whom must choose an action; together, these actions form a solution to the optimization problem. The coordination complexity is the minimal amount of information that a centralized coordinator, who has full knowledge of the problem instance, needs to broadcast in order to coordinate the $n$ parties to play a nearly optimal solution. We show that upper bounds on the coordination complexity of a problem imply the existence of good jointly differentially private algorithms for solving that problem, which in turn are known to upper bound the price of anarchy in certain games with dynamically changing populations. We show several results. We fully characterize the coordination complexity for the problem of computing a many-to-one matching in a bipartite graph by giving almost matching lower and upper bounds. Our upper bound in fact extends much more generally, to the problem of solving a linearly separable convex program. We also give a different upper bound technique, which we use to bound the coordination complexity of coordinating a Nash equilibrium in a routing game, and of computing a stable matching.
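One plausible way to formalize the quantity described in the summary (a sketch only; the symbols $\Pi$, $I$, $I_i$, $C$, $A_i$, $\mathrm{val}$, $\mathrm{OPT}$, and $\varepsilon$ are illustrative choices made here and not the paper's notation): for a class of instances $\Pi$ in which party $i$ holds the private share $I_i$ of an instance $I$,
\[
  \mathrm{CC}_{\varepsilon}(\Pi)
  \;=\;
  \min_{C,\,A_1,\dots,A_n}\;
  \max_{I \in \Pi}\; \bigl|C(I)\bigr|
  \quad\text{subject to}\quad
  \mathrm{val}\bigl(A_1(I_1, C(I)), \dots, A_n(I_n, C(I))\bigr)
  \;\ge\; (1-\varepsilon)\,\mathrm{OPT}(I),
\]
where $C(I)$ is the coordinator's broadcast measured in bits, $A_i$ maps party $i$'s private share together with the broadcast to an action, and $\mathrm{OPT}(I)$ is the optimal objective value of the instance.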
DOI: 10.48550/arxiv.1508.03735