Position-Aware Relational Transformer for Knowledge Graph Embedding
Although the Transformer has achieved success in language and vision tasks, its capacity for knowledge graph (KG) embedding has not been fully exploited. Using the self-attention (SA) mechanism in the Transformer to model the subject-relation-object triples in KGs suffers from training inconsistency, as SA i...
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 35, No. 8, pp. 11580-11594
Format: Journal Article
Language: English
Published: United States, IEEE, 01.08.2024