Principled Representation Learning for Entity Alignment


Bibliographic Details
Published in: arXiv.org
Main Authors: Guo, Lingbing; Sun, Zequn; Chen, Mingyang; Hu, Wei; Zhang, Qiang; Chen, Huajun
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 21.10.2021

Summary: Embedding-based entity alignment (EEA) has recently received great attention. Despite significant performance improvements, little effort has been devoted to understanding how EEA methods work. Most existing studies rest on the assumption that a small number of pre-aligned entities can serve as anchors connecting the embedding spaces of two knowledge graphs (KGs). Nevertheless, the rationality of this assumption has not been investigated. To fill this research gap, we define a typical paradigm abstracted from existing EEA methods and analyze how the embedding discrepancy between two potentially aligned entities is implicitly bounded by a predefined margin in the scoring function. We further find that this bound is not guaranteed to be tight enough for alignment learning. We mitigate this problem by proposing a new approach, named NeoEA, which explicitly learns KG-invariant and principled entity embeddings. In this sense, an EEA model not only pursues the closeness of aligned entities based on geometric distance, but also aligns the neural ontologies of the two KGs by eliminating the discrepancy in their embedding distributions and underlying ontology knowledge. Our experiments demonstrate consistent and significant performance improvements over the best-performing EEA methods.
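To make the two ideas in the summary concrete, the following is a minimal sketch, not the authors' implementation: (1) a margin-based alignment loss over pre-aligned anchor pairs, which only implicitly bounds the embedding discrepancy of aligned entities by the margin, and (2) an explicit penalty on the discrepancy between the two KGs' embedding distributions. The function names are hypothetical, and the choice of moment matching for the distribution penalty is an assumption; NeoEA's actual objective may differ.

```python
import torch
import torch.nn.functional as F

def margin_alignment_loss(e1, e2, neg1, neg2, margin=1.0):
    """Margin-based loss over pre-aligned anchor pairs (illustrative).

    e1, e2:     (B, d) embeddings of aligned entities from KG1 and KG2.
    neg1, neg2: (B, d) embeddings of randomly sampled negative pairs.
    """
    pos = torch.norm(e1 - e2, p=2, dim=1)      # distance of aligned pairs
    neg = torch.norm(neg1 - neg2, p=2, dim=1)  # distance of negative pairs
    # Aligned pairs are pushed closer than negatives by at least `margin`,
    # so their discrepancy is only implicitly bounded by the margin.
    return F.relu(pos - neg + margin).mean()

def distribution_discrepancy(emb_kg1, emb_kg2):
    """Illustrative moment-matching penalty between the two embedding spaces.

    emb_kg1, emb_kg2: (N1, d) and (N2, d) entity embedding matrices.
    A simple stand-in for eliminating the discrepancy between the two
    KGs' embedding distributions, matching first and second moments.
    """
    mean_gap = torch.norm(emb_kg1.mean(dim=0) - emb_kg2.mean(dim=0))
    cov_gap = torch.norm(torch.cov(emb_kg1.T) - torch.cov(emb_kg2.T))
    return mean_gap + cov_gap

# Hypothetical combined objective: geometric closeness of anchors plus
# an explicit distribution-alignment term weighted by `lam`.
# total = margin_alignment_loss(e1, e2, n1, n2) + lam * distribution_discrepancy(E1, E2)
```

Moment matching is only one simple way to penalize a distribution gap; adversarial or divergence-based objectives would serve the same role in this sketch.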
ISSN: 2331-8422