Semantic Matching Template-Based Zero-Shot Relation Triplet Extraction
Published in | IEICE Transactions on Information and Systems, Vol. E108.D, No. 3, pp. 277-285 |
Main Authors | Mei ZHANG, Yu TIAN, Jianyong DUAN, Yuechen YANG (College of Informatics, North China University of Technology) |
Format | Journal Article |
Language | English |
Published | Tokyo: The Institute of Electronics, Information and Communication Engineers / Japan Science and Technology Agency, 01.03.2025 |
Subjects | Datasets; fine-grained matching; Large language models; prompt learning; relation triplet extraction; Template matching; Zero-shot learning |
Online Access | https://www.jstage.jst.go.jp/article/transinf/E108.D/3/E108.D_2024EDP7137/_article/-char/en (open access) |
ISSN | 0916-8532 (print); 1745-1361 (online) |
DOI | 10.1587/transinf.2024EDP7137 |
Abstract | To address the limitation of annotated datasets confined to fixed relation domains, which hampers the effective extraction of triplets, especially for novel relation types, our work introduces an innovative approach. We propose a method for training large-scale language models using prompt templates designed for zero-shot learning in relation triplet extraction tasks. By utilizing these specially crafted prompt templates in combination with fine-grained matching scoring rules, we transform the structured prediction task into a cloze task. This transformation aligns the task more closely with the intrinsic capabilities of the language model, facilitating a more natural processing flow. Experimental evaluations on two public datasets show that our method achieves stable and enhanced performance compared to baseline models. This improvement underscores the efficiency and potential of our approach in facilitating zero-shot extraction of relation triplets, thus broadening the scope of applicable relation types without the need for domain-specific training data. |
Article Number | 2024EDP7137 |
Copyright | 2025 The Institute of Electronics, Information and Communication Engineers; Japan Science and Technology Agency |
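The abstract sketches the core mechanism: rather than predicting a structured triplet directly, the model fills a relation slot in a natural-language template, and candidate relations are ranked by a matching score. The Python sketch below illustrates that cloze-style reformulation under stated assumptions; the template wording, the `VERBALIZERS` mapping, the choice of `bert-base-uncased`, and the helper names are hypothetical and do not reproduce the authors' templates or fine-grained scoring rules.

```python
# Minimal sketch of cloze-style zero-shot relation scoring with a masked LM.
# Assumptions (not from the paper): the template wording, the single-token
# verbalizers, and the use of bert-base-uncased are illustrative only.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical relation -> single-token verbalizer mapping.
VERBALIZERS = {
    "founded_by": "founded",   # verbalized as "<head> founded <tail>"
    "spouse": "married",       # verbalized as "<head> married <tail>"
}

def score_relations(sentence: str, head: str, tail: str) -> dict:
    """Score each candidate relation by the MLM probability of its
    verbalizer filling the blank in a cloze template."""
    mask = fill_mask.tokenizer.mask_token  # "[MASK]" for BERT
    prompt = f"{sentence} In other words, {head} {mask} {tail}."
    results = fill_mask(prompt, targets=list(VERBALIZERS.values()))
    by_token = {r["token_str"].strip(): r["score"] for r in results}
    return {rel: by_token.get(verb, 0.0) for rel, verb in VERBALIZERS.items()}

def extract_triplet(sentence: str, head: str, tail: str):
    """Pick the best-scoring relation for a given entity pair."""
    scores = score_relations(sentence, head, tail)
    relation = max(scores, key=scores.get)
    return (head, relation, tail), scores[relation]

if __name__ == "__main__":
    triplet, confidence = extract_triplet(
        "Elon Musk started Tesla in 2003.", "Elon Musk", "Tesla")
    print(triplet, f"score={confidence:.3f}")
```

In the paper's full setting, entity spans are also extracted and the fine-grained scoring combines entity- and relation-level matches; this sketch fixes the entity pair so that only the cloze reformulation itself is on display.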