HiRes-LLaVA: Restoring Fragmentation Input in High-Resolution Large Vision-Language Models
| Published in | arXiv.org |
|---|---|
| Main Authors | Huang, Runhui; Ding, Xinpeng; Wang, Chunwei; Han, Jianhua; Liu, Yulong; Zhao, Hengshuang; Xu, Hang; Hou, Lu; Zhang, Wei; Liang, Xiaodan |
| Format | Paper |
| Language | English |
| Published | Ithaca: Cornell University Library, arXiv.org, 11.07.2024 |
| Subjects | |
| Online Access | Get full text |
Abstract | High-resolution inputs enable Large Vision-Language Models (LVLMs) to discern finer visual details, enhancing their comprehension capabilities. To reduce the training and computation costs caused by high-resolution input, one promising direction is to use sliding windows to slice the input into uniform patches, each matching the input size of the well-trained vision encoder. Although efficient, this slicing strategy leads to the fragmentation of the original input, i.e., the continuity of contextual information and spatial geometry is lost across patches, adversely affecting performance in cross-patch context perception and position-specific tasks. To overcome these shortcomings, we introduce HiRes-LLaVA, a novel framework designed to efficiently process any size of high-resolution input without altering the original contextual and geometric information. HiRes-LLaVA comprises two innovative components: (i) a SliceRestore adapter that reconstructs sliced patches into their original form, efficiently extracting both global and local features via down-up-sampling and convolution layers, and (ii) a Self-Mining Sampler that compresses the vision tokens based on themselves, preserving the original context and positional information while reducing training overhead. To assess the ability to handle context fragmentation, we construct a new benchmark, EntityGrid-QA, consisting of edge-related and position-related tasks. Our comprehensive experiments demonstrate the superiority of HiRes-LLaVA on both existing public benchmarks and EntityGrid-QA, particularly on document-oriented tasks, establishing new standards for handling high-resolution inputs. |
Author | Huang, Runhui; Ding, Xinpeng; Wang, Chunwei; Han, Jianhua; Liu, Yulong; Zhao, Hengshuang; Xu, Hang; Hou, Lu; Zhang, Wei; Liang, Xiaodan |
ContentType | Paper |
Copyright | 2024. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
Discipline | Physics |
EISSN | 2331-8422 |
Genre | Working Paper/Pre-Print |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | false |
Language | English |
OpenAccessLink | https://www.proquest.com/docview/3079573666 |
PublicationCentury | 2000 |
PublicationDate | 20240711 |
PublicationDateYYYYMMDD | 2024-07-11 |
PublicationDecade | 2020 |
PublicationPlace | Ithaca |
PublicationTitle | arXiv.org |
PublicationYear | 2024 |
Publisher | Cornell University Library, arXiv.org |
SecondaryResourceType | preprint |
SourceID | proquest |
SourceType | Aggregation Database |
SubjectTerms | Benchmarks Context Fragmentation High resolution Samplers Vision |
Title | HiRes-LLaVA: Restoring Fragmentation Input in High-Resolution Large Vision-Language Models |
URI | https://www.proquest.com/docview/3079573666 |