Towards Bayesian Learning of the Architecture, Graph and Parameters for Graph Neural Networks
Published in: 2022 56th Asilomar Conference on Signals, Systems, and Computers, pp. 852-856
Main Authors:
Format: Conference Proceeding
Language: English
Published: IEEE, 31.10.2022
Summary: Real-life data often arises from relational structures that are best modeled by graphs. Bayesian learning on graphs has emerged as a framework which allows us to model prior beliefs about network data in a mathematically principled way. The approach provides uncertainty estimates and can perform very well on small sample sizes when provided with an informative prior. Much of the work on Bayesian graph neural networks (GNNs) has focused on inferring the structure of the underlying graph and the model weights. Although research effort has been directed towards network architecture search for GNNs, existing strategies are not Bayesian and return a point estimate of the optimal architecture. In this work, we propose a method for principled Bayesian modelling for GNNs that allows for inference of a posterior over the architecture (number of layers, number of active neurons, aggregators, pooling), the graph, and the model parameters. We evaluate our proposed method on three mainstream datasets.
ISSN: 2576-2303
DOI: 10.1109/IEEECONF56349.2022.10052017
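The summary names the design choices placed under the posterior (number of layers, number of active neurons, aggregators, pooling). As a rough illustration only, and not the paper's algorithm, the sketch below places a uniform prior over such an architecture search space and forms a simple Monte Carlo approximation of the architecture posterior; `log_marginal_likelihood` is a hypothetical placeholder for the score a real implementation would obtain by marginalising the graph and the model weights.

```python
# Generic, hedged sketch: a prior over a GNN architecture search space and a
# Monte Carlo approximation of the posterior over architectures. This is an
# illustration of the general idea only, not the method proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Discrete prior over the architecture choices mentioned in the abstract
# (hidden_units stands in for "number of active neurons").
SPACE = {
    "num_layers": [1, 2, 3, 4],
    "hidden_units": [16, 32, 64],
    "aggregator": ["mean", "max", "sum"],
    "pooling": ["mean", "attention"],
}

def sample_architecture():
    """Draw one architecture uniformly from the prior."""
    return {k: rng.choice(v) for k, v in SPACE.items()}

def log_marginal_likelihood(arch):
    """Hypothetical placeholder for log p(data | architecture); a real system
    would marginalise the graph and weights here (e.g. with a variational bound).
    The toy score below merely prefers moderate depth and width for illustration."""
    return -abs(int(arch["num_layers"]) - 2) - 0.01 * float(arch["hidden_units"])

# Importance-style Monte Carlo approximation of the architecture posterior.
samples = [sample_architecture() for _ in range(2000)]
log_w = np.array([log_marginal_likelihood(a) for a in samples])
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Posterior marginal over the number of layers.
for depth in SPACE["num_layers"]:
    mask = np.array([a["num_layers"] == depth for a in samples])
    print(f"p(num_layers = {depth} | data) ~ {w[mask].sum():.3f}")
```

In a full treatment, the same weighting would extend jointly to the graph structure and the model parameters rather than scoring architectures in isolation.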