Position-Aware Relational Transformer for Knowledge Graph Embedding

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 35, No. 8, pp. 11580-11594
Main Authors: Li, Guangyao; Sun, Zequn; Hu, Wei; Cheng, Gong; Qu, Yuzhong
Format: Journal Article
Language: English
Published: IEEE, United States, 01.08.2024
Summary: Although the Transformer has achieved success in language and vision tasks, its capacity for knowledge graph (KG) embedding has not been fully exploited. Using the self-attention (SA) mechanism in the Transformer to model the subject-relation-object triples in KGs suffers from training inconsistency, as SA is invariant to the order of input tokens. As a result, it is unable to distinguish a (real) relation triple from its shuffled (fake) variants (e.g., object-relation-subject) and thus fails to capture the correct semantics. To cope with this issue, we propose a novel Transformer architecture, namely, Knowformer, for KG embedding. It incorporates relational compositions in entity representations to explicitly inject semantics and capture the role of an entity based on its position (subject or object) in a relation triple. The relational composition for a subject (or object) entity of a relation triple refers to an operator on the relation and the object (or subject). We borrow ideas from typical translational and semantic-matching embedding techniques to design relational compositions. We carefully design a residual block to integrate relational compositions into SA and efficiently propagate the composed relational semantics layer by layer. We formally prove that SA with relational compositions can distinguish the entity roles in different positions and correctly capture relational semantics. Extensive experiments and analyses on six benchmark datasets show that Knowformer achieves state-of-the-art performance on both link prediction and entity alignment.
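
To make the mechanism concrete, below is a minimal PyTorch sketch of the idea the summary describes: composing the relation with the opposite-side entity and adding the result to an entity's representation through a residual connection before self-attention, so that a triple and its shuffled variant no longer yield the same input. This is an illustration under stated assumptions, not the authors' released Knowformer code; the names compose and PositionAwareBlock, the role-dependent operators, and all dimensions are hypothetical. The two modes mirror the translational (TransE-style, from s + r ≈ o) and semantic-matching (DistMult-style, elementwise product) families the summary mentions.

    import torch
    import torch.nn as nn

    def compose(rel, other, role, mode="translational"):
        # Role-aware relational composition (illustrative, not the paper's exact operators).
        # Translational (TransE-style, from s + r ~ o):
        #   the subject's composition is (object - relation),
        #   the object's composition is (subject + relation).
        # Semantic matching (DistMult-style): elementwise product, which is
        # symmetric in the two entities, matching DistMult's known behavior.
        if mode == "translational":
            return other - rel if role == "subject" else other + rel
        return rel * other

    class PositionAwareBlock(nn.Module):
        # One Transformer block whose entity tokens carry position-dependent
        # relational compositions via a residual connection (hypothetical layout).
        def __init__(self, d_model=256, n_heads=4, mode="translational"):
            super().__init__()
            self.mode = mode
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm = nn.LayerNorm(d_model)

        def forward(self, subj, rel, obj):
            # Inject role-specific semantics before self-attention. Plain SA over
            # [s, r, o] is permutation-equivariant; these residuals break that symmetry.
            subj_in = subj + compose(rel, obj, "subject", self.mode)
            obj_in = obj + compose(rel, subj, "object", self.mode)
            seq = torch.stack([subj_in, rel, obj_in], dim=1)  # (batch, 3, d_model)
            out, _ = self.attn(seq, seq, seq)
            return self.norm(seq + out)

    # Toy check: with plain SA, the output for s as subject in (s, r, o) would be
    # identical to the output for s as object in the shuffled (o, r, s).
    torch.manual_seed(0)
    batch, dim = 2, 256
    s, r, o = torch.randn(batch, dim), torch.randn(batch, dim), torch.randn(batch, dim)
    block = PositionAwareBlock(d_model=dim).eval()
    real, fake = block(s, r, o), block(o, r, s)
    print(torch.allclose(real[:, 0], fake[:, 2]))  # False: entity roles are distinguished

Because the subject and object receive different compositions, the real triple and its object-relation-subject shuffle map to different representations, which is exactly the distinction the summary says order-invariant self-attention cannot make on its own.
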
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2023.3262937