Temporal knowledge graph reasoning based on evolutional representation and contrastive learning
Published in | Applied Intelligence (Dordrecht, Netherlands), Vol. 54, No. 21, pp. 10929-10947 |
---|---|
Format | Journal Article |
Language | English |
Published | New York: Springer US, 01.11.2024 (Springer Nature B.V.) |
Summary: | Temporal knowledge graphs (TKGs) are a form of knowledge representation constructed from the evolution of events at different time points. By extending knowledge representation along the temporal dimension, they provide an additional perspective for a range of downstream tasks. Given the evolving nature of events, it is essential for TKGs to support reasoning about non-existent or future events. Most existing models divide the graph into multiple time snapshots and predict future events by modeling information within and between snapshots. However, since knowledge graphs inherently suffer from missing data and uneven data distribution, this time-based division drastically reduces the data available within each snapshot, making it difficult to learn high-quality representations of entities and relations. In addition, the contribution of historical information changes over time, so its importance to the final result must be weighed when capturing information that evolves over time. In this paper, we introduce CH-TKG (Contrastive Learning and Historical Information Learning for TKG Reasoning) to address data sparsity and the ambiguity of historical information weights. First, we obtain embeddings of entities and relations with evolutionary dependencies via an R-GCN and a GRU. On this foundation, we introduce a novel contrastive learning method to optimize entity and relation representations within individual snapshots of sparse data. We then use self-attention and copy mechanisms to learn how different historical data affect the final inference results. Extensive experiments on four datasets demonstrate the effectiveness of the proposed model on sparse data. |
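The abstract does not give the paper's exact loss, but the contrastive-learning step it describes is commonly instantiated as an InfoNCE-style objective: each entity embedding (anchor) is pulled toward a positive view of the same entity and pushed away from other entities in the snapshot. The sketch below is purely illustrative of that general technique, not CH-TKG's actual formulation; the function name, temperature value, and the convention that `positives[i]` is the positive for `anchors[i]` are all assumptions.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss (NOT the paper's exact loss).

    anchors, positives: (n, d) arrays; row i of `positives` is assumed to be
    the positive view of row i of `anchors`; all other rows act as negatives.
    """
    # L2-normalize so similarities are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    # (n, n) similarity matrix, scaled by the temperature
    logits = a @ p.T / temperature
    # subtract the row max for numerical stability before the softmax
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the positive pair sits on the diagonal; maximize its log-probability
    return -np.mean(np.diag(log_probs))
```

With matched positives the loss is near zero; shuffling the positives (so the diagonal no longer pairs matching views) makes it large, which is the behavior a contrastive objective on sparse snapshots relies on.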
---|---|
ISSN: | 0924-669X 1573-7497 |
DOI: | 10.1007/s10489-024-05767-6 |