Balancing structure and position information in Graph Transformer network with a learnable node embedding
Published in: Expert Systems with Applications, Vol. 238, p. 122096
Main Authors:
Format: Journal Article
Language: English
Published: Elsevier Ltd, 15.03.2024
Summary: Transformer-based graph neural network models have achieved remarkable results in graph representation learning in recent years. One of the main challenges in graph representation learning with the Transformer architecture is the lack of a universal positional encoding. Standard positional encoding methods usually rely on the eigenvectors of the graph Laplacian matrix. However, encodings built solely on the structural information in these eigenvectors perform poorly on graph learning tasks that depend on a node's local structure. In this work, we propose a novel node encoding that leverages both a node's global position information and its local structural information, and that generalizes well across a wide range of graph learning tasks. The global positional encoding branch operates on the eigenvalues and eigenvectors of the Laplacian matrix of the entire graph. The structural encoding branch is derived from a spectral encoding of the node's local subgraph; it captures local properties that are usually lost in Laplacian positional encoding because high graph frequencies are cut off. Both encoding branches have learnable weights and are mapped into predefined embedding spaces, and a weighted combination then produces a unique positional encoding for each node. We validate the effectiveness of the proposed encoding on a variety of graph learning datasets covering node classification, link prediction, graph classification, and graph regression tasks. The overall results demonstrate that our structural and positional encoding balances local and global structural information and outperforms most of the baseline models.
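As a rough illustration of the two-branch design described in the summary, the sketch below (not the authors' code; all module names, hop counts, and dimensions are assumptions) computes a global Laplacian-eigenvector encoding and a local subgraph spectral encoding, projects each with a learnable linear map, and mixes them with a learnable weight.

```python
# Minimal, illustrative sketch of combining a global Laplacian positional
# encoding with a local subgraph spectral encoding via learnable weights.
# Sizes, hop counts, and the exact local construction are assumptions.
import torch
import torch.nn as nn


def laplacian_eigs(adj: torch.Tensor, k: int) -> torch.Tensor:
    """First k eigenvectors of the (unnormalized) graph Laplacian."""
    lap = torch.diag(adj.sum(dim=1)) - adj
    _, vecs = torch.linalg.eigh(lap)            # eigenvectors, ascending eigenvalues
    return vecs[:, :k]                          # (num_nodes, k)


def local_spectral_encoding(adj: torch.Tensor, hops: int, k: int) -> torch.Tensor:
    """Per-node spectrum of its h-hop subgraph (stand-in for the local branch)."""
    n = adj.shape[0]
    reach = torch.matrix_power(adj + torch.eye(n), hops) > 0   # h-hop reachability
    feats = []
    for v in range(n):
        idx = reach[v].nonzero(as_tuple=True)[0]
        sub = adj[idx][:, idx]
        evals = torch.linalg.eigvalsh(torch.diag(sub.sum(dim=1)) - sub)
        vec = torch.zeros(k)
        vec[: min(k, evals.numel())] = evals[:k]                # pad/truncate to k
        feats.append(vec)
    return torch.stack(feats)                                   # (num_nodes, k)


class CombinedNodeEncoding(nn.Module):
    """Learnable weighted mix of global positional and local structural branches."""

    def __init__(self, k: int, dim: int):
        super().__init__()
        self.pos_proj = nn.Linear(k, dim)       # global branch -> embedding space
        self.struct_proj = nn.Linear(k, dim)    # local branch  -> embedding space
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learnable mixing weight

    def forward(self, pos_enc: torch.Tensor, struct_enc: torch.Tensor) -> torch.Tensor:
        w = torch.sigmoid(self.alpha)           # keep the weight in (0, 1)
        return w * self.pos_proj(pos_enc) + (1 - w) * self.struct_proj(struct_enc)


if __name__ == "__main__":
    # Toy 5-node path graph.
    adj = torch.zeros(5, 5)
    for i in range(4):
        adj[i, i + 1] = adj[i + 1, i] = 1.0
    pos = laplacian_eigs(adj, k=3)
    struct = local_spectral_encoding(adj, hops=2, k=3)
    node_emb = CombinedNodeEncoding(k=3, dim=8)(pos, struct)
    print(node_emb.shape)                       # torch.Size([5, 8])
```

The sigmoid-gated weight keeps the mix in (0, 1), so during training the model can shift toward the local or the global branch depending on what the task requires.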
Highlights:
- Graph Transformer networks need both structural and positional encoding.
- Propose a lightweight and robust node positional encoding.
- Propose adaptive learning of structural information.
- Unify structural and positional information into a general node embedding.
ISSN: 0957-4174; 1873-6793
DOI: 10.1016/j.eswa.2023.122096