Position-Aware Relational Transformer for Knowledge Graph Embedding
Published in | IEEE Transactions on Neural Networks and Learning Systems, Vol. 35, no. 8, pp. 11580-11594 |
Main Authors | Li, Guangyao; Sun, Zequn; Hu, Wei; Cheng, Gong; Qu, Yuzhong |
Format | Journal Article |
Language | English |
Published | United States: IEEE, 01.08.2024 |
Subjects | Encoding; Entity alignment; knowledge graph (KG) embedding; Knowledge graphs; link prediction; position encoding; Predictive models; Semantics; Task analysis; Training; Transformer; Transformers |
Abstract | Although Transformer has achieved success in language and vision tasks, its capacity for knowledge graph (KG) embedding has not been fully exploited. Using the self-attention (SA) mechanism in Transformer to model the subject-relation-object triples in KGs suffers from training inconsistency as SA is invariant to the order of input tokens. As a result, it is unable to distinguish a (real) relation triple from its shuffled (fake) variants (e.g., object-relation-subject) and, thus, fails to capture the correct semantics. To cope with this issue, we propose a novel Transformer architecture, namely, Knowformer, for KG embedding. It incorporates relational compositions in entity representations to explicitly inject semantics and capture the role of an entity based on its position (subject or object) in a relation triple. The relational composition for a subject (or object) entity of a relation triple refers to an operator on the relation and the object (or subject). We borrow ideas from the typical translational and semantic-matching embedding techniques to design relational compositions. We carefully design a residual block to integrate relational compositions into SA and efficiently propagate the composed relational semantics layer by layer. We formally prove that the SA with relational compositions is able to distinguish the entity roles in different positions and correctly capture relational semantics. Extensive experiments and analyses on six benchmark datasets show that Knowformer achieves state-of-the-art performance on both link prediction and entity alignment. |
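The abstract's key technical claim, that vanilla self-attention cannot distinguish a triple (s, r, o) from a shuffled variant such as (o, r, s), follows from SA's permutation equivariance; the proposed fix injects role-dependent relational compositions into the entity tokens. Below is a minimal numpy sketch of that contrast, not the paper's implementation: the single-head attention layer is deliberately bare, and the translational composition used here (the subject slot absorbs o - r, the object slot absorbs s + r, echoing the s + r ≈ o intuition the abstract borrows from translational embeddings) is an assumed, illustrative operator; Knowformer's actual compositions and residual-block integration are defined in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
s, r, o = rng.normal(size=(3, dim))  # toy subject, relation, object embeddings

def self_attention(tokens):
    """Bare single-head scaled dot-product self-attention (no positional info)."""
    logits = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ tokens

# Plain SA is permutation-equivariant: shuffling (s, r, o) into (o, r, s)
# merely permutes the outputs, so entity s gets the *same* representation
# whether it sits in the subject slot or the object slot.
out_sro = self_attention(np.stack([s, r, o]))
out_ors = self_attention(np.stack([o, r, s]))
print(np.allclose(out_sro[0], out_ors[2]))  # True -> roles indistinguishable

# Hypothetical translational composition (assumed for illustration): each
# entity token absorbs an operator over the other two triple elements, so
# its representation now depends on its role in the triple.
comp_sro = np.stack([s + (o - r), r, o + (s + r)])  # s in the subject role
comp_ors = np.stack([o + (s - r), r, s + (o + r)])  # s in the object role
out_c_sro = self_attention(comp_sro)
out_c_ors = self_attention(comp_ors)
print(np.allclose(out_c_sro[0], out_c_ors[2]))  # False -> role changes s's output
```

Run as-is, the first check prints True (s receives an identical representation in either slot) and the second prints False, matching the abstract's argument that role-aware compositions break SA's permutation symmetry.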
Author | Hu, Wei; Qu, Yuzhong; Cheng, Gong; Li, Guangyao; Sun, Zequn |
Author_xml | – sequence: 1; givenname: Guangyao; surname: Li; fullname: Li, Guangyao; orcidid: 0000-0002-2233-8470; email: gyli.nju@gmail.com; organization: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
– sequence: 2; givenname: Zequn; surname: Sun; fullname: Sun, Zequn; orcidid: 0000-0003-4177-9199; email: zqsun.nju@gmail.com; organization: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
– sequence: 3; givenname: Wei; surname: Hu; fullname: Hu, Wei; orcidid: 0000-0003-3635-6335; email: whu@nju.edu.cn; organization: State Key Laboratory for Novel Software Technology and the National Institute of Healthcare Data Science, Nanjing University, Nanjing, China
– sequence: 4; givenname: Gong; surname: Cheng; fullname: Cheng, Gong; orcidid: 0000-0003-3539-7776; email: gcheng@nju.edu.cn; organization: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
– sequence: 5; givenname: Yuzhong; surname: Qu; fullname: Qu, Yuzhong; email: yzqu@nju.edu.cn; organization: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/37018088 (view this record in MEDLINE/PubMed) |
CODEN | ITNNAL |
ContentType | Journal Article |
DOI | 10.1109/TNNLS.2023.3262937 |
Discipline | Computer Science |
EISSN | 2162-2388 |
EndPage | 11594 |
ExternalDocumentID | 37018088 10_1109_TNNLS_2023_3262937 10092525 |
Genre | orig-research Journal Article |
GrantInformation_xml | – fundername: National Natural Science Foundation of China grantid: 62272219; 61872172 funderid: 10.13039/501100001809 |
ISSN | 2162-237X |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 8 |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0003-3539-7776 0000-0003-4177-9199 0000-0002-2233-8470 0000-0003-3635-6335 |
PMID | 37018088 |
PQID | 2796161624 |
PQPubID | 23479 |
PageCount | 15 |
PublicationDate | 2024-08-01 |
PublicationPlace | United States |
PublicationTitle | IEEE Transactions on Neural Networks and Learning Systems |
PublicationTitleAbbrev | TNNLS |
PublicationTitleAlternate | IEEE Trans Neural Netw Learn Syst |
PublicationYear | 2024 |
Publisher | IEEE |
StartPage | 11580 |
SubjectTerms | Encoding; Entity alignment; knowledge graph (KG) embedding; Knowledge graphs; link prediction; position encoding; Predictive models; Semantics; Task analysis; Training; Transformer; Transformers |
Title | Position-Aware Relational Transformer for Knowledge Graph Embedding |
URI | https://ieeexplore.ieee.org/document/10092525 https://www.ncbi.nlm.nih.gov/pubmed/37018088 https://www.proquest.com/docview/2796161624 |
Volume | 35 |