Video-XL: Extra-Long Vision Language Model for Hour-Scale Video Understanding
Format | Journal Article |
---|---|
Language | English |
Published | 22.09.2024 |
Summary: | Although current Multi-modal Large Language Models (MLLMs) demonstrate promising results in video understanding, processing extremely long videos remains an ongoing challenge. Typically, MLLMs struggle to handle the thousands of visual tokens that exceed the maximum context length, and they suffer from information decay due to token aggregation. A further challenge is the high computational cost stemming from the large number of video tokens. To tackle these issues, we propose Video-XL, an extra-long vision language model designed for efficient hour-scale video understanding. Specifically, we argue that LLMs can be adapted as effective visual condensers and propose Visual Context Latent Summarization, which condenses visual contexts into highly compact forms. Extensive experiments demonstrate that our model achieves promising results on popular long-video understanding benchmarks. For example, Video-XL outperforms the current state-of-the-art method on VNBench by nearly 10% in accuracy. Moreover, Video-XL strikes an impressive balance between efficiency and effectiveness, processing 2048 frames on a single 80GB GPU while achieving nearly 95% accuracy in the Needle-in-a-Haystack evaluation. |
DOI: | 10.48550/arxiv.2409.14485 |
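
For intuition, the condensation idea named in the abstract, adapting a transformer to squeeze long runs of visual tokens into a handful of latent summary tokens, can be sketched in a few lines of PyTorch. The sketch below is a generic illustration of visual-token condensation, not the paper's actual Visual Context Latent Summarization implementation: the module name `LatentSummarizer`, the 64-token chunk size, the 4 summary tokens per chunk (16x compression), and the 144-tokens-per-frame figure in the comment are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class LatentSummarizer(nn.Module):
    """Toy sketch: condense a long visual token stream by appending learnable
    summary tokens to each chunk and keeping only the summary-token outputs.
    All names and sizes are illustrative, not the paper's actual design."""
    def __init__(self, dim=1024, chunk=64, n_summary=4, n_layers=2, n_heads=8):
        super().__init__()
        self.chunk = chunk          # visual tokens per group
        self.n_summary = n_summary  # summary tokens per group -> 16x compression
        self.summary = nn.Parameter(torch.randn(n_summary, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(dim, n_heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, visual_tokens):  # (B, T, D)
        B, T, D = visual_tokens.shape
        out = []
        for start in range(0, T, self.chunk):
            group = visual_tokens[:, start:start + self.chunk]  # (B, <=chunk, D)
            summ = self.summary.unsqueeze(0).expand(B, -1, -1)  # (B, n_summary, D)
            encoded = self.encoder(torch.cat([group, summ], dim=1))
            out.append(encoded[:, -self.n_summary:])            # keep summary states only
        return torch.cat(out, dim=1)  # (B, ceil(T/chunk) * n_summary, D)

# Usage: at an assumed 144 tokens per frame, 2048 frames yield ~295k visual
# tokens; a 16x condenser reduces that to ~18k tokens before the LLM sees them.
x = torch.randn(1, 4096, 1024)  # small stand-in for a real token stream
z = LatentSummarizer()(x)
print(x.shape, "->", z.shape)   # (1, 4096, 1024) -> (1, 256, 1024)
```

Keeping only the summary-token states is what makes hour-scale input tractable in this pattern: the downstream language model attends over the short condensed stream rather than the raw per-frame tokens.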