Text Information Aggregation with Centrality Attention
Published in: arXiv.org
Main Authors: Gong, Jingjing; Yan, Hang; Zheng, Yining; Qiu, Xipeng; Huang, Xuanjing
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 16.11.2020
Subjects: Agglomeration; Algorithms; Eigenvectors; Iterative methods; Natural language processing; Power consumption; Words (language)
Online Access: https://www.proquest.com/docview/2461165196
Abstract: Many natural language processing tasks require encoding a text sequence as a fixed-length vector, which usually involves an aggregation step that combines the representations of all the words, such as pooling or self-attention. However, these widely used aggregation approaches do not take higher-order relationships among the words into consideration. Hence we propose a new way of obtaining aggregation weights, called eigen-centrality self-attention. More specifically, we build a fully-connected graph over all the words in a sentence, then compute the eigen-centrality of each word as its attention score. Explicitly modeling the relationships as a graph captures higher-order dependencies among words, which helps us achieve better results than baseline models such as pooling, self-attention, and dynamic routing on five text classification tasks and the SNLI task. To compute the dominant eigenvector of the graph, we adopt the power method to obtain the eigen-centrality measure. Moreover, we derive an iterative approach to computing the gradient of the power method, which reduces both memory consumption and computation requirements.
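A minimal NumPy sketch of the aggregation scheme the abstract describes: build a fully-connected affinity graph over the word vectors, run the power method to approximate its dominant eigenvector, and use those eigen-centrality scores as attention weights. The function name and the choice of exponentiated scaled dot products as edge weights are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def eigen_centrality_attention(H, num_iters=50, eps=1e-8):
    """Aggregate word representations H (n_words x d) into one vector.

    1. Build a fully-connected word graph from pairwise affinities.
    2. Run the power method to approximate the dominant eigenvector,
       whose entries serve as eigen-centrality attention scores.
    3. Return the attention-weighted sum of the word vectors.
    """
    # Fully-connected graph: a non-negative affinity for every word pair.
    # (Assumption: exponentiated scaled dot products; the paper may differ.)
    A = np.exp(H @ H.T / np.sqrt(H.shape[1]))

    # Power method: repeatedly multiply a positive start vector by A and
    # renormalize; for a positive matrix this converges to the dominant
    # eigenvector, whose entries are all positive (Perron-Frobenius).
    v = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(num_iters):
        v = A @ v
        v = v / (np.linalg.norm(v) + eps)

    # Normalize the eigen-centrality scores into attention weights.
    attn = v / (v.sum() + eps)
    return attn @ H, attn

# Usage: aggregate five 16-dimensional word vectors into a sentence vector.
H = np.random.randn(5, 16)
sentence_vec, weights = eigen_centrality_attention(H)
print(weights)             # per-word centrality scores, summing to 1
print(sentence_vec.shape)  # (16,)
```

Note that this sketch ignores training: naively backpropagating through every power iteration would store all intermediate vectors, which is the memory cost the paper's iterative gradient for the power method is designed to avoid.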