A Manifold Perspective on the Statistical Generalization of Graph Neural Networks
Main Authors | , , |
---|---|
Format | Journal Article |
Language | English |
Published | 07.06.2024 |
Summary: Graph Neural Networks (GNNs) extend convolutional neural networks to operate on graphs. Despite their impressive performance in various graph learning tasks, the theoretical understanding of their generalization capability is still lacking. Previous GNN generalization bounds ignore the underlying graph structure, often leading to bounds that increase with the number of nodes, a behavior contrary to what is observed in practice. In this paper, we take a manifold perspective to establish a statistical generalization theory for GNNs on graphs sampled from a manifold, working in the spectral domain. We prove, and verify empirically, that the generalization bounds of GNNs decrease linearly with the size of the graphs on a logarithmic scale and increase linearly with the spectral continuity constants of the filter functions. Notably, our theory covers both node-level and graph-level tasks. Our result has two implications: i) it guarantees the generalization of GNNs to unseen data over manifolds; ii) it provides insight into the practical design of GNNs, namely that restrictions on the discriminability of GNNs are necessary to obtain better generalization performance. We demonstrate our generalization bounds of GNNs using synthetic and multiple real-world datasets.
DOI: 10.48550/arxiv.2406.05225
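
To make the objects named in the summary concrete, here is a minimal, hedged sketch (not the authors' code) of a spectral-domain graph filter on a graph whose nodes are sampled from a simple manifold (points on a circle), together with a crude finite-difference estimate of the filter function's spectral continuity (Lipschitz) constant. The k-NN graph construction, the low-pass filter h(λ) = exp(−λ), and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: spectral graph filtering on a manifold-sampled graph
# and a rough estimate of the filter's spectral continuity constant.
import numpy as np

def knn_graph_laplacian(points, k=8):
    """Symmetric normalized Laplacian of a k-NN graph over points on the circle."""
    n = len(points)
    d = np.abs(points[:, None] - points[None, :])   # pairwise angle differences
    d = np.minimum(d, 2 * np.pi - d)                # geodesic distance on the circle
    idx = np.argsort(d, axis=1)[:, 1:k + 1]         # k nearest neighbors (skip self)
    A = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    A[rows, idx.ravel()] = 1.0
    A = np.maximum(A, A.T)                          # symmetrize the adjacency
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    return np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt

def spectral_filter(L, x, h):
    """Apply the spectral filter h(L) to the node signal x via eigendecomposition."""
    lam, V = np.linalg.eigh(L)
    return V @ (h(lam) * (V.T @ x))

def continuity_constant(h, lam_grid):
    """Finite-difference proxy for sup |h(a) - h(b)| / |a - b| on a grid."""
    vals = h(lam_grid)
    return np.max(np.abs(np.diff(vals)) / np.diff(lam_grid))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 400                                          # number of sampled manifold points
    points = rng.uniform(0, 2 * np.pi, n)            # sample the 1-D manifold (circle)
    L = knn_graph_laplacian(points)
    h = lambda lam: np.exp(-lam)                     # smooth low-pass filter function
    x = rng.standard_normal(n)                       # random node signal
    y = spectral_filter(L, x, h)
    C_h = continuity_constant(h, np.linspace(0.0, 2.0, 1000))
    print(f"filtered-signal norm: {np.linalg.norm(y):.3f}, continuity constant: {C_h:.3f}")
```

A smoother filter function yields a smaller continuity constant, which, per the summary's claim, is the kind of restriction on discriminability that the paper associates with tighter generalization bounds; re-running the sketch with larger n illustrates the graph-size dependence side of that trade-off, though the exact rates are established in the paper itself.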