Generating Textual Entailment Using Residual LSTMs
Published in | Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data Vol. 10565; pp. 263 - 272 |
---|---|
Main Authors | Guo, Maosheng; Zhang, Yu; Zhao, Dezhi; Liu, Ting |
Format | Book Chapter |
Language | English |
Published | Cham, Switzerland: Springer International Publishing AG, 2017 |
Series | Lecture Notes in Computer Science |
Subjects | Artificial intelligence; Generating textual entailment; Natural language generation; Natural language processing |
Abstract | Generating textual entailment (GTE) is a recently proposed task that studies how to infer a sentence from a given premise. Current sequence-to-sequence GTE models are prone to producing invalid sentences when faced with sufficiently complex premises. Moreover, the lack of appropriate evaluation criteria hinders research on GTE. In this paper, we conjecture that an underpowered encoder is the major bottleneck in generating more meaningful sequences, and we improve on this by employing a residual LSTM network. With the extended model, we obtain state-of-the-art results. Furthermore, we propose a novel metric for GTE, namely EBR (Evaluated By Recognizing textual entailment), which can evaluate different GTE approaches in an objective and fair way without human effort while also considering the diversity of inferences. Finally, we point out the limitation of adapting a general sequence-to-sequence framework to the GTE setting, with some proposals for future research, hoping to generate more public discussion. |
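The chapter itself is not reproduced in this record, but the abstract's central technical claim is that a residual LSTM encoder strengthens the sequence-to-sequence GTE model. The snippet below is a minimal, hypothetical sketch of that general idea, not the authors' released architecture or code; the class and parameter names (ResidualLSTMEncoder, embed_dim, hidden_dim, num_layers) are invented for illustration and assume PyTorch.

```python
# Hypothetical sketch of a residual LSTM encoder for a seq2seq GTE model
# (illustrative only; not the chapter's actual implementation).
import torch
import torch.nn as nn

class ResidualLSTMEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # First layer maps embeddings to the hidden size; upper layers are
        # hidden-to-hidden, so a residual (skip) connection is dimension-safe.
        self.layers = nn.ModuleList(
            [nn.LSTM(embed_dim if i == 0 else hidden_dim, hidden_dim,
                     batch_first=True) for i in range(num_layers)]
        )

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        x = self.embed(token_ids)                  # (batch, seq_len, embed_dim)
        for i, lstm in enumerate(self.layers):
            out, _ = lstm(x)                       # (batch, seq_len, hidden_dim)
            x = out if i == 0 else out + x         # residual skip above layer 0
        return x                                   # premise encoding for the decoder

# Usage with toy shapes: encode a batch of 8 premises of length 12.
if __name__ == "__main__":
    encoder = ResidualLSTMEncoder(vocab_size=20000)
    states = encoder(torch.randint(0, 20000, (8, 12)))
    print(states.shape)  # torch.Size([8, 12, 300])
```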
Author | Guo, Maosheng; Zhang, Yu; Zhao, Dezhi; Liu, Ting |
Author_xml | Guo, Maosheng (msguo@ir.hit.edu.cn, ORCID 0000-0002-3829-1179); Zhang, Yu (zhangyu@ir.hit.edu.cn); Zhao, Dezhi (dzzhao@ir.hit.edu.cn); Liu, Ting (tliu@ir.hit.edu.cn); all: School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China |
ContentType | Book Chapter |
Copyright | Springer International Publishing AG 2017 |
DEWEY | 006.35 |
DOI | 10.1007/978-3-319-69005-6_22 |
Discipline | Languages & Literatures; Computer Science |
EISBN | 3319690051; 9783319690056 |
EISSN | 1611-3349 |
Editor | Chang, Baobao; Xiong, Deyi; Sun, Maosong; Wang, Xiaojie |
EndPage | 272 |
ExternalDocumentID | EBC5592538_334_275 |
ISBN | 3319690043; 9783319690049 |
ISSN | 0302-9743 |
IsPeerReviewed | true |
IsScholarly | true |
LCCallNum | QA76.9.N38 |
Language | English |
OCLC | 1005922354 |
ORCID | 0000-0002-3829-1179 |
PQID | EBC5592538_334_275 |
PageCount | 10 |
PublicationCentury | 2000 |
PublicationDate | 2017 |
PublicationDateYYYYMMDD | 2017-01-01 |
PublicationDecade | 2010 |
PublicationPlace | Cham, Switzerland |
PublicationSeriesSubtitle | Lecture Notes in Artificial Intelligence |
PublicationSeriesTitle | Lecture Notes in Computer Science |
PublicationSeriesTitleAlternate | Lect.Notes Computer |
PublicationSubtitle | 16th China National Conference, CCL 2017, and 5th International Symposium, NLP-NABD 2017, Nanjing, China, October 13-15, 2017, Proceedings |
PublicationTitle | Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data |
PublicationYear | 2017 |
Publisher | Springer International Publishing AG |
RelatedPersons | Hutchison, David (Lancaster University); Kanade, Takeo (Carnegie Mellon University); Kittler, Josef (University of Surrey); Kleinberg, Jon M. (Cornell University); Mattern, Friedemann (ETH Zurich); Mitchell, John C. (Stanford, USA); Naor, Moni (Weizmann Institute of Science); Pandu Rangan, C. (Indian Institute of Technology Madras); Steffen, Bernhard (TU Dortmund); Terzopoulos, Demetri (University of California, Los Angeles); Tygar, Doug (University of California, Berkeley); Weikum, Gerhard (Max Planck Institute for Informatics) |
SourceID | springer proquest |
SourceType | Publisher |
StartPage | 263 |
SubjectTerms | Artificial intelligence; Generating textual entailment; Natural language generation; Natural language processing |
Title | Generating Textual Entailment Using Residual LSTMs |
URI | http://ebookcentral.proquest.com/lib/SITE_ID/reader.action?docID=5592538&ppg=275 http://link.springer.com/10.1007/978-3-319-69005-6_22 |
Volume | 10565 |