EFGCLS: A Cross-Lingual Summarization Method Based on Element Fact-Relationship Generation
Published in | IEICE Transactions on Information and Systems Vol. E108.D; no. 9; pp. 1108 - 1118 |
Main Authors | Yuxin HUANG, Jiushun MA, Tianxu LI, Zhengtao YU, Yantuan XIAN, Yan XIANG |
Format | Journal Article |
Language | English |
Published | The Institute of Electronics, Information and Communication Engineers (一般社団法人 電子情報通信学会), 01.09.2025 |
Subjects | Chain-of-Thought; Cross-lingual summarization; Element fact-relationship; Fine-grained elements |
Online Access | https://www.jstage.jst.go.jp/article/transinf/E108.D/9/E108.D_2024EDP7274/_article/-char/en |
ISSN | 0916-8532 (print); 1745-1361 (online) |
DOI | 10.1587/transinf.2024EDP7274 |
Abstract | Cross-lingual summarization (CLS) simplifies obtaining information across languages by generating summaries in the target language from source documents written in another. State-of-the-art neural summarization models typically rely on training or fine-tuning with extensive corpora; applying these approaches in practical industrial scenarios is nonetheless challenging because annotated data are scarce. Recent research uses large language models (LLMs) to generate better summaries by extracting fine-grained elements (entities, dates, events, and results) from source documents with Chain-of-Thought (CoT) prompting. Such an approach inevitably loses the fact-relationships across elements in the original document, which hurts summary generation. In this paper, we both substantiate the importance of fact-relationships across elements for summary generation on the element-aware test sets of CNN/DailyMail and BBC XSum, and propose a novel Cross-Lingual Summarization method based on Element Fact-relationship Generation (EFGCLS). Specifically, we break the CLS task into three simple subtasks: first, element fact-relationship generation extracts fine-grained elements from the source document and the fact-relationships across them; next, monolingual document summarization leverages these fact-relationships together with the source document to generate a monolingual summary; finally, cross-lingual summarization via Cross-lingual Prompting (CLP) enhances the alignment between source-language and target-language summaries. Experimental results on the element-aware datasets show that our method outperforms state-of-the-art fine-tuned PLMs and zero-shot LLMs by +6.28 and +1.22 ROUGE-L, respectively. |
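The abstract describes a three-stage decomposition (element fact-relationship generation, monolingual summarization, cross-lingual summarization via CLP). The sketch below only illustrates that decomposition as a chain of prompts; the `llm` helper, the prompt wording, and the English-to-Chinese direction are assumptions made for illustration, not the authors' released implementation.

```python
# Illustrative sketch of an EFGCLS-style three-stage prompting pipeline.
# The llm() callable, prompt wording, and language pair are assumptions;
# the paper's actual prompts and CLP alignment step are more involved.
from typing import Callable


def efgcls_summarize(document: str,
                     llm: Callable[[str], str],
                     target_lang: str = "Chinese") -> str:
    # Stage 1: extract fine-grained elements (entities, dates, events,
    # results) and the fact-relationships that connect them.
    facts = llm(
        "List the key entities, dates, events, and results in the article "
        "below, then state the factual relationships between them as short "
        "subject-relation-object statements.\n\nArticle:\n" + document
    )

    # Stage 2: monolingual summarization grounded on the extracted
    # fact-relationships plus the source document.
    mono_summary = llm(
        "Using the article and the fact-relationships below, write a concise "
        "summary in the article's own language.\n\n"
        "Article:\n" + document + "\n\nFact-relationships:\n" + facts
    )

    # Stage 3: cross-lingual step in the spirit of Cross-lingual Prompting
    # (CLP), producing the summary in the target language.
    return llm(
        "Rewrite the summary below in " + target_lang + ", keeping every "
        "stated fact unchanged.\n\nSummary:\n" + mono_summary
    )
```

Reducing the CLP stage to a single rewrite prompt is a simplification made here for brevity; the method as described aligns source- and target-language summaries rather than merely translating.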
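The +6.28/+1.22 gains quoted in the abstract are ROUGE-L F1 differences (scaled to 0-100). A generic way to compute ROUGE-L is Google's rouge-score package, shown below with placeholder strings; this is not the paper's evaluation script, and for non-English (e.g. Chinese) summaries the texts are normally word-segmented before scoring.

```python
# Generic ROUGE-L computation with the rouge-score package (pip install rouge-score).
# The reference/candidate strings are placeholders, not data from the paper.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
reference = "the human-written reference summary"  # target text
candidate = "the system-generated summary"         # prediction text
scores = scorer.score(reference, candidate)        # score(target, prediction)
print(round(scores["rougeL"].fmeasure * 100, 2))   # ROUGE-L F1 on a 0-100 scale
```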
ArticleNumber | 2024EDP7274 |
Copyright | 2025 The Institute of Electronics, Information and Communication Engineers |
Discipline | Engineering; Computer Science |
Open Access | Yes |
Peer Reviewed | Yes |
Issue | 9 |
URI | https://www.jstage.jst.go.jp/article/transinf/E108.D/9/E108.D_2024EDP7274/_article/-char/en https://cir.nii.ac.jp/crid/1390866345579736704 |
Volume | E108.D |