Representation Learning of Knowledge Graphs with Embedding Subspaces

Bibliographic Details
Published in: Scientific Programming, Vol. 2020, pp. 1–10
Main Authors: Cui, Zhiming; Ai, Xusheng; Xian, Xuefeng; Li, Chunhua
Format: Journal Article
Language: English
Published: Hindawi Publishing Corporation, Cairo, Egypt, 2020

Summary: Most existing knowledge graph embedding models are supervised methods that rely heavily on the quality and quantity of available labelled training data. Obtaining high-quality triples is costly, and the data sources suffer from serious sparsity, which may leave long-tail entities insufficiently trained. However, unstructured text encoding entity and relational knowledge is available everywhere in large quantities. Word vectors of entity names, estimated from unlabelled raw text with a language model, encode the syntactic and semantic properties of entities. Yet because these feature vectors are estimated by minimizing prediction error on raw text rather than on any knowledge graph objective, they may not be optimal for knowledge graphs. We propose a two-phase approach that adapts unsupervised entity name embeddings to a knowledge graph subspace and jointly learns the adaptive matrix and the knowledge representation. Experiments on Freebase show that our method depends less on labelled data and outperforms the baselines when labelled data is relatively scarce. In particular, it is applicable to the zero-shot scenario.
ISSN: 1058-9244
eISSN: 1875-919X
DOI: 10.1155/2020/4741963
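
The abstract describes projecting pretrained entity-name word vectors into a knowledge graph subspace through an adaptive matrix that is learned jointly with the relation embeddings. The following is a minimal sketch of that general idea, assuming a TransE-style margin ranking objective with negative sampling; the variable names, dimensions, and hyperparameters are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: entity-name word vectors (fixed, from an unsupervised
# language model) are mapped into a KG subspace by an adaptive matrix M, and
# M is trained jointly with relation embeddings R under a TransE-style
# margin ranking loss. All names and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_entities, n_relations = 100, 5
word_dim, kg_dim = 50, 20          # word-vector dim -> KG subspace dim

# Stand-in for pretrained word vectors of entity names (kept fixed).
word_vecs = rng.normal(scale=0.1, size=(n_entities, word_dim))

M = rng.normal(scale=0.1, size=(kg_dim, word_dim))     # adaptive matrix (learned)
R = rng.normal(scale=0.1, size=(n_relations, kg_dim))  # relation embeddings (learned)

def entity(e):
    """Entity representation in the KG subspace: M @ w_e."""
    return M @ word_vecs[e]

def score(h, r, t):
    """TransE-style energy ||M w_h + r - M w_t||^2; lower is better."""
    return np.sum((entity(h) + R[r] - entity(t)) ** 2)

# Toy training triples (head, relation, tail); a real run would use Freebase.
triples = [(rng.integers(n_entities), rng.integers(n_relations),
            rng.integers(n_entities)) for _ in range(500)]

margin, lr = 1.0, 0.01
for epoch in range(50):
    for h, r, t in triples:
        t_neg = rng.integers(n_entities)           # corrupt the tail entity
        loss = margin + score(h, r, t) - score(h, r, t_neg)
        if loss <= 0:
            continue                               # hinge already satisfied
        # Gradients of the hinge loss w.r.t. R[r] and M.
        d_pos = 2 * (entity(h) + R[r] - entity(t))
        d_neg = 2 * (entity(h) + R[r] - entity(t_neg))
        grad_R = d_pos - d_neg
        grad_M = (np.outer(d_pos, word_vecs[h] - word_vecs[t])
                  - np.outer(d_neg, word_vecs[h] - word_vecs[t_neg]))
        R[r] -= lr * grad_R
        M -= lr * grad_M
```

Because the word vectors stay fixed and only the adaptive matrix M and the relation embeddings are trained, even an entity unseen during training still receives a subspace representation via M @ word_vecs[e], which is consistent with the zero-shot applicability claimed in the abstract.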