Structure-Aware Transformer for hyper-relational knowledge graph completion
| Published in | Expert Systems with Applications, Vol. 277, p. 126992 |
|---|---|
| Main Authors | , , |
| Format | Journal Article |
| Language | English |
| Published | Elsevier Ltd, 05.06.2025 |
Summary: Hyper-relational knowledge graphs (HKGs) have gradually gained attention in recent studies. Unlike a vanilla knowledge graph (KG), which represents real-world connections as numerous triples (subject, relation, object), an HKG can describe a complex event in a single fact, represented as a main triple plus auxiliary key–value pairs. Recently, there have been increasing efforts to use the Transformer framework for HKG representation learning. Although these methods achieve moderate performance, they do not make good use of the structural information (i.e., connectivity and direction) in an HKG, which is important for representation learning. To address these problems, we propose a novel Transformer-based, structure-aware method for HKG representation learning. Specifically, we first provide a subgraph sampling strategy that makes reasonable use of the connection information between the central fact and its subgraph in the HKG. Next, we account for the heterogeneous characteristics of the original HKG and the binding property between key–value pairs by introducing heterogeneous attention biases and a key–value joint attention mechanism, respectively. Beyond that, we introduce direction information during propagation between Transformer layers. More importantly, although our model is originally designed for HKGs, it can be regarded as a universal framework that generalizes to other types of KGs, such as triple-based and temporal KGs. We evaluate our model on the link prediction task, also called knowledge graph completion (KGC). Experimental results show that our model outperforms baseline models on HKG datasets. Meanwhile, it achieves results surpassing or on par with the baselines on triple-based and temporal KGs even though it is not specifically designed for them, demonstrating the strong generalization ability of our approach.
The code is available at https://github.com/zjukg/HyperSAT.
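To make the key–value joint attention idea from the abstract concrete, the following is a minimal sketch (not the authors' implementation; see the repository above for that). It assumes the core intuition stated in the summary: each auxiliary key–value pair is bound together, so the pair is fused into a single token before attention rather than attended as two independent tokens. All names, dimensions, and the fusion function here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # illustrative embedding dimension

# A hyper-relational fact: a main triple plus auxiliary key-value pairs,
# e.g. (Einstein, educated_at, ETH Zurich) with {degree: BSc, year: 1900}.
main_triple = [rng.normal(size=d) for _ in range(3)]      # subject, relation, object
qualifiers = [(rng.normal(size=d), rng.normal(size=d))    # (key, value) embedding pairs
              for _ in range(2)]

def joint_qualifier_tokens(qualifiers, W):
    """Fuse each key-value pair into one token, so the pair is attended jointly
    (a hypothetical fusion: concatenate, project, nonlinearity)."""
    return [np.tanh(W @ np.concatenate([k, v])) for k, v in qualifiers]

def attention(query, tokens):
    """Plain scaled dot-product attention of one query over a token list."""
    K = np.stack(tokens)                       # (num_tokens, d)
    scores = K @ query / np.sqrt(len(query))   # similarity per token
    weights = np.exp(scores - scores.max())    # stable softmax
    weights /= weights.sum()
    return weights @ K                         # weighted mix of tokens

W = rng.normal(size=(d, 2 * d)) / np.sqrt(2 * d)
tokens = main_triple + joint_qualifier_tokens(qualifiers, W)
context = attention(main_triple[0], tokens)    # subject attends over the whole fact
print(context.shape)
```

The design point being illustrated: fusing each key–value pair into one token means attention weights are assigned to the pair as a unit, which respects the binding property the abstract highlights; treating keys and values as separate tokens would let attention split a qualifier apart.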
Highlights:
• The structure information in the knowledge graph should be considered.
• Three structural types are effective: subgraphs, connectivity, and directionality.
• A uniform framework (HyperSAT) is proposed to model different forms of KGs.
• HyperSAT achieves excellent performance on different forms of KGs.
ISSN: 0957-4174
DOI: 10.1016/j.eswa.2025.126992