LengthPath: The Length Reward of Knowledge Graph Reasoning Based on Deep Reinforcement Learning

Bibliographic Details
Published in 2024 International Joint Conference on Neural Networks (IJCNN), pp. 1 - 8
Main Authors Ling-Xiao, Xu; Lin, Feng; Zi-Hao, Li; Ling, Yue; Qiu-Ping, Shuai; Jie-Wei, Li
Format Conference Proceeding
Language English
Published IEEE 30.06.2024
Abstract Knowledge Graphs (KGs) always suffer from incompleteness. Knowledge Graph Reasoning (KGR) aims to predict unknown entities or find reasoning paths for relations over an incomplete KG. However, multi-hop reasoning still faces challenges because the reasoning process usually runs into the neighbor-information issue. Prior works use path efficiency only as a part of the reward function and do not utilize neighbor information effectively. To deal with this situation, we propose a length reward within the Reinforcement Learning (RL) framework that represents the length of reasoning paths and makes effective use of neighbor information. Our model utilizes positional information to design the reward function. To preserve the semantic information of entity neighbors and the historical trajectory, we propose a new GRU-GAT framework that captures the neighbor features of the current entity and the target entity. Experimental results on NELL-995 and FB15K-237 demonstrate the effectiveness of our model and show that it can identify a more balanced route for every relation.
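As a rough illustration of the length-reward idea described in the abstract, the following Python sketch shapes a terminal reward by how far the reasoning-path length deviates from a preferred length. The function name, the preferred length, and the weighting are hypothetical; the abstract does not give the exact formulation used by LengthPath.

# A minimal, hypothetical sketch of a length-based reward for multi-hop
# KG reasoning. The exact reward used by LengthPath is not specified in
# the abstract, so the shaping below is illustrative only.

def length_reward(path_length, preferred_length=3, terminal_hit=True,
                  hit_bonus=1.0, length_weight=0.5):
    """Combine a terminal hit reward with a penalty for deviating from
    a preferred reasoning-path length."""
    hit_reward = hit_bonus if terminal_hit else 0.0
    # Penalize paths that are much shorter or longer than the preferred length.
    length_penalty = length_weight * abs(path_length - preferred_length)
    return hit_reward - length_penalty

if __name__ == "__main__":
    # A 3-hop path that reaches the target entity scores higher than a 6-hop one.
    print(length_reward(3, terminal_hit=True))   # 1.0
    print(length_reward(6, terminal_hit=True))   # -0.5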
Author Zi-Hao, Li
Jie-Wei, Li
Ling, Yue
Qiu-Ping, Shuai
Lin, Feng
Ling-Xiao, Xu
Author_xml – sequence: 1
  givenname: Xu
  surname: Ling-Xiao
  fullname: Ling-Xiao, Xu
  email: jayxlx@stu.sicnu.edu.cn
  organization: Sichuan Normal University, School of Computer Science, Chengdu, China
– sequence: 2
  givenname: Feng
  surname: Lin
  fullname: Lin, Feng
  email: fenglin@stu.sicnu.edu.cn
  organization: Sichuan Normal University, School of Computer Science, Chengdu, China
– sequence: 3
  givenname: Li
  surname: Zi-Hao
  fullname: Zi-Hao, Li
  email: li_zihao@stu.sicnu.edu.cn
  organization: Sichuan Normal University, School of Computer Science, Chengdu, China
– sequence: 4
  givenname: Yue
  surname: Ling
  fullname: Ling, Yue
  email: 20221302005@stu.sicnu.edu.cn
  organization: Sichuan Normal University, School of Computer Science, Chengdu, China
– sequence: 5
  givenname: Shuai
  surname: Qiu-Ping
  fullname: Qiu-Ping, Shuai
  email: qiuping@stu.sicnu.edu.cn
  organization: Sichuan Normal University, School of Computer Science, Chengdu, China
– sequence: 6
  givenname: Li
  surname: Jie-Wei
  fullname: Jie-Wei, Li
  email: lijiewei@stu.sicnu.edu.cn
  organization: Sichuan Normal University, School of Computer Science, Chengdu, China
ContentType Conference Proceeding
DOI 10.1109/IJCNN60899.2024.10650504
Discipline Computer Science
EISBN 9798350359312
EISSN 2161-4407
EndPage 8
ExternalDocumentID 10650504
Genre orig-research
IsPeerReviewed false
IsScholarly false
Language English
PublicationCentury 2000
PublicationDate 2024-June-30
PublicationDateYYYYMMDD 2024-06-30
PublicationDate_xml – month: 06
  year: 2024
  text: 2024-June-30
  day: 30
PublicationDecade 2020
PublicationTitle 2024 International Joint Conference on Neural Networks (IJCNN)
PublicationTitleAbbrev IJCNN
PublicationYear 2024
Publisher IEEE
Publisher_xml – name: IEEE
SourceID ieee
SourceType Publisher
StartPage 1
SubjectTerms Cognition
Deep reinforcement learning
Knowledge Graph Reasoning
Knowledge graphs
length reward
Neural networks
Reinforcement Learning
Semantics
Statistical analysis
Trajectory
Title LengthPath: The Length Reward of Knowledge Graph Reasoning Based on Deep Reinforcement Learning
URI https://ieeexplore.ieee.org/document/10650504