Position-Aware Relational Transformer for Knowledge Graph Embedding

Although Transformer has achieved success in language and vision tasks, its capacity for knowledge graph (KG) embedding has not been fully exploited. Using the self-attention (SA) mechanism in Transformer to model the subject-relation-object triples in KGs suffers from training inconsistency as SA i...
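The training inconsistency the abstract alludes to stems from self-attention being permutation-equivariant when no positional information is injected. Below is a minimal sketch (not the paper's model; the weight names, dimensions, and mean-pooling readout are all illustrative assumptions) showing that a pooled SA encoding of a triple (s, r, o) is numerically identical to that of its reversed counterpart (o, r, s):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)

# Random projection matrices for a single attention head.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def self_attention(X):
    """Single-head self-attention with no positional encodings."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

# Hypothetical subject, relation, and object embeddings.
s, r, o = rng.standard_normal((3, d))
triple  = np.stack([s, r, o])   # (s, r, o)
swapped = np.stack([o, r, s])   # symmetric counterpart (o, r, s)

# SA is permutation-equivariant, so a pooled readout is permutation-invariant:
pooled_a = self_attention(triple).mean(axis=0)
pooled_b = self_attention(swapped).mean(axis=0)
print(np.allclose(pooled_a, pooled_b))  # True: the two orderings are indistinguishable
```

Because the pooled representations coincide, any score computed from them assigns a true triple and its reversed (possibly false) counterpart the same value, which is the gap a position-aware design aims to close.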

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 35, No. 8, pp. 11580-11594
Main Authors: Li, Guangyao; Sun, Zequn; Hu, Wei; Cheng, Gong; Qu, Yuzhong
Format: Journal Article
Language: English
Published: IEEE, United States, 01.08.2024