FedGTA: Topology-aware Averaging for Federated Graph Learning
Format | Journal Article
---|---
Language | English
Published | 22.01.2024
Summary: Federated Graph Learning (FGL) is a distributed machine learning paradigm that enables collaborative training on large-scale subgraphs across multiple local systems. Existing FGL studies fall into two categories: (i) FGL Optimization, which improves multi-client training in existing machine learning models; (ii) FGL Model, which enhances performance with complex local models and multi-client interactions. However, most FGL optimization strategies are designed specifically for the computer vision domain and ignore graph structure, yielding unsatisfactory performance and slow convergence. Meanwhile, the complex local model architectures in FGL Model studies lack the scalability to handle large-scale subgraphs and face deployment limitations. To address these issues, we propose Federated Graph Topology-aware Aggregation (FedGTA), a personalized optimization strategy that aggregates models using topology-aware local smoothing confidence and mixed neighbor features. In experiments, we deploy FedGTA on 12 multi-scale real-world datasets partitioned with Louvain and Metis, allowing us to evaluate its performance and robustness across a range of scenarios. Extensive experiments demonstrate that FedGTA achieves state-of-the-art performance while exhibiting high scalability and efficiency. The experiments include ogbn-papers100M, the most representative large-scale graph dataset, which lets us verify the applicability of our method to large-scale graph learning. To the best of our knowledge, our study is the first to bridge large-scale graph learning with FGL through such an optimization strategy, contributing to the development of efficient and scalable FGL methods.
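The core idea of topology-aware aggregation can be illustrated with a small sketch. The helper names (`local_smoothing_confidence`, `topology_aware_average`) and the neighbor-agreement confidence proxy below are assumptions chosen for illustration, not the paper's exact formulation: each client scores how well its predictions agree with its local graph neighborhood, and the server averages model parameters weighted by those scores rather than by client size alone, as plain FedAvg would.

```python
import numpy as np

def local_smoothing_confidence(adj, soft_labels):
    """Hypothetical confidence proxy: average agreement between each
    node's soft prediction and the mean prediction of its neighbors.
    adj: (n, n) adjacency matrix; soft_labels: (n, c) class probabilities."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)  # avoid division by zero
    neigh_mean = adj @ soft_labels / deg              # neighborhood average
    agreement = (soft_labels * neigh_mean).sum(axis=1)
    return float(agreement.mean())

def topology_aware_average(client_params, confidences):
    """Server-side aggregation: weight each client's parameter dict by its
    (normalized) topology-aware confidence instead of uniform weights."""
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()
    return {k: sum(wi * p[k] for wi, p in zip(w, client_params))
            for k in client_params[0]}
```

A client whose predictions are smooth over its subgraph receives a higher aggregation weight, which is one plausible way graph structure can inform the otherwise structure-agnostic averaging step.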
DOI: 10.48550/arxiv.2401.11755