Adaptive curvature exploration geometric graph neural network
Published in: Knowledge and Information Systems, Vol. 65, No. 5, pp. 2281–2304
Format: Journal Article
Language: English
Published: London: Springer London, 01.05.2023 (Springer Nature B.V.)
ISSN: 0219-1377; 0219-3116
DOI: 10.1007/s10115-022-01811-4
Summary: Graph neural networks (GNNs), powerful and widely applied models, rest on the assumption that graph topology plays a key role in graph representation learning. However, existing GNN methods embed graphs in Euclidean space, which struggles to represent the variety of graph geometric properties well. Recently, Riemannian geometries have been introduced into GNNs, such as hyperbolic graph neural networks for hierarchy-preserving graph representation learning. In Riemannian geometry, different graph topological structures are reflected by correspondingly curved embedding spaces: a hyperbolic space can be understood as a continuous tree-like structure, and a spherical space as a continuous clique. However, most existing non-Euclidean GNNs rely on heuristic, manual statistical, or estimation methods and cannot automatically select the appropriate embedding space for graphs with different topological properties. To address this problem, we propose the Adaptive Curvature Exploration Geometric Graph Neural Network, which learns high-quality graph representations and explores the embedding space with optimal curvature at the same time. We cast the joint task of graph representation learning and curvature exploration as a multi-objective optimization problem, solve it with multi-agent reinforcement learning, and use the Nash Q-learning algorithm to collaboratively train the two agents toward a Nash equilibrium. Extensive experiments on synthetic and real-world graph datasets demonstrate significant and consistent performance improvements and the generalization of our method.
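The correspondence between curvature and topology described in the summary is usually made concrete with a constant-curvature model in which a single parameter κ interpolates between hyperbolic (κ < 0), Euclidean (κ = 0), and spherical (κ > 0) geometry. The NumPy sketch below uses the standard κ-stereographic Möbius addition and gyro-distance for illustration only; the paper's exact formulation and notation may differ.

```python
import numpy as np

def mobius_add(x, y, kappa):
    """Mobius (gyro) addition in the kappa-stereographic model, which
    unifies the Poincare ball (kappa < 0), Euclidean space (kappa = 0),
    and the stereographic sphere (kappa > 0)."""
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 - 2 * kappa * xy - kappa * y2) * x + (1 + kappa * x2) * y
    den = 1 - 2 * kappa * xy + kappa**2 * x2 * y2
    return num / den

def tan_inv_kappa(r, kappa):
    """Curvature-dependent inverse tangent: arctan, identity, or arctanh."""
    if kappa > 0:
        return np.arctan(r * np.sqrt(kappa)) / np.sqrt(kappa)
    if kappa < 0:
        return np.arctanh(r * np.sqrt(-kappa)) / np.sqrt(-kappa)
    return r

def distance(x, y, kappa):
    """Geodesic (gyro) distance; varies smoothly as kappa crosses zero
    and reduces to 2 * ||x - y|| at kappa = 0 (a convention of the model)."""
    return 2.0 * tan_inv_kappa(np.linalg.norm(mobius_add(-x, y, kappa)), kappa)

# The same pair of points moves further apart as curvature becomes more
# negative, which is why tree-like graphs favor hyperbolic embeddings and
# why the choice of curvature is worth optimizing per graph.
x, y = np.array([0.3, 0.0]), np.array([0.0, 0.3])
for kappa in (-1.0, 0.0, 1.0):
    print(f"kappa = {kappa:+.1f}:  d(x, y) = {distance(x, y, kappa):.4f}")
```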
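The two-agent training scheme the summary describes can likewise be sketched. Below is a minimal, self-contained Nash Q-learning loop over a toy random environment that merely stands in for the real setting (where one agent would update the GNN and the other the curvature, with rewards derived from task performance). For brevity it searches only pure-strategy equilibria, with a fallback when none exists, whereas Nash Q-learning in general allows mixed strategies. All sizes, rewards, and transitions here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 4, 3        # toy sizes, purely illustrative
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.2

# One Q-table per agent, indexed by (state, action_1, action_2).
# In the paper's setting, agent 1 would drive representation learning
# and agent 2 curvature exploration; here both are abstract.
Q = [np.zeros((N_STATES, N_ACTIONS, N_ACTIONS)) for _ in range(2)]

# A fixed random general-sum game standing in for the real environment.
R = [rng.normal(size=(N_STATES, N_ACTIONS, N_ACTIONS)) for _ in range(2)]
T = rng.integers(N_STATES, size=(N_STATES, N_ACTIONS, N_ACTIONS))

def pure_nash(q1, q2):
    """Find a pure-strategy Nash equilibrium of the stage game (q1, q2):
    a joint action where each agent best-responds to the other. Falls
    back to the best joint payoff if no pure equilibrium exists."""
    for a1 in range(N_ACTIONS):
        for a2 in range(N_ACTIONS):
            if q1[a1, a2] >= q1[:, a2].max() and q2[a1, a2] >= q2[a1, :].max():
                return a1, a2
    return np.unravel_index((q1 + q2).argmax(), q1.shape)

s = 0
for _ in range(5000):
    # Joint epsilon-greedy: explore randomly or play the stage-game equilibrium.
    if rng.random() < EPS:
        a1, a2 = rng.integers(N_ACTIONS, size=2)
    else:
        a1, a2 = pure_nash(Q[0][s], Q[1][s])
    s_next = T[s, a1, a2]
    n1, n2 = pure_nash(Q[0][s_next], Q[1][s_next])  # Nash value of next state
    for i in range(2):
        target = R[i][s, a1, a2] + GAMMA * Q[i][s_next, n1, n2]
        Q[i][s, a1, a2] += ALPHA * (target - Q[i][s, a1, a2])
    s = s_next

print("equilibrium joint action in state 0:", pure_nash(Q[0][0], Q[1][0]))
```

Each agent's update bootstraps from the other agent's equilibrium play rather than from its own greedy maximum, which is what distinguishes Nash Q-learning from running two independent Q-learners.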