Enhancing Long Video Understanding via Hierarchical Event-Based Memory

Bibliographic Details
Main Authors: Cheng, Dingxin; Li, Mingda; Liu, Jingyu; Guo, Yongxin; Jiang, Bin; Liu, Qingbin; Chen, Xi; Zhao, Bo
Format: Journal Article
Language: English
Published: 10.09.2024

Summary: Recently, integrating visual foundation models into large language models (LLMs) to form video understanding systems has attracted widespread attention. Most existing models compress the diverse semantic information of the whole video and feed it into LLMs for content comprehension. While this approach excels at short video understanding, in long videos the coarse compression blends information from multiple events, causing information redundancy. Consequently, the semantics of key events may be obscured within the vast amount of information, which hinders the model's understanding capabilities. To address this issue, we propose a Hierarchical Event-based Memory-enhanced LLM (HEM-LLM) for better understanding of long videos. Firstly, we design a novel adaptive sequence segmentation scheme to divide the multiple events within a long video. In this way, we can perform individual memory modeling for each event to establish intra-event contextual connections, thereby reducing information redundancy. Secondly, while modeling the current event, we compress and inject the information of the previous event to enhance long-term inter-event dependencies in videos. Finally, we perform extensive experiments on various video understanding tasks, and the results show that our model achieves state-of-the-art performance.
DOI: 10.48550/arxiv.2409.06299
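
The abstract only describes the approach at a high level, and the paper's implementation is not reproduced in this record. The Python snippet below is a minimal illustrative sketch of how the two stated ideas could fit together: cutting a frame-feature sequence into events where consecutive-frame cosine similarity drops below an adaptive threshold, and prepending a compressed memory of the previous event when the current event is modeled. The function names (adaptive_event_segmentation, compress_event, build_event_inputs), the similarity-threshold rule, and the average-pooling compression are assumptions made for exposition, not details taken from the paper.

# Illustrative sketch only: the segmentation rule, memory size, and pooling
# below are assumptions for exposition, not HEM-LLM's actual implementation.
import torch
import torch.nn.functional as F


def adaptive_event_segmentation(frame_feats: torch.Tensor, k: float = 1.0):
    """Split a frame-feature sequence of shape (T, D) into event segments.

    A boundary is placed wherever the cosine similarity between consecutive
    frames drops more than k standard deviations below the mean similarity
    (an assumed stand-in for the paper's adaptive segmentation scheme).
    """
    sims = F.cosine_similarity(frame_feats[:-1], frame_feats[1:], dim=-1)  # (T-1,)
    threshold = sims.mean() - k * sims.std()
    boundaries = (sims < threshold).nonzero(as_tuple=True)[0] + 1  # indices starting new events
    starts = [0] + boundaries.tolist()
    ends = boundaries.tolist() + [frame_feats.shape[0]]
    return [frame_feats[s:e] for s, e in zip(starts, ends)]


def compress_event(event_feats: torch.Tensor, num_tokens: int = 4) -> torch.Tensor:
    """Compress one event of shape (T_e, D) into a small memory of shape
    (num_tokens, D) via adaptive average pooling over time (a simple
    placeholder for the paper's memory compression)."""
    pooled = F.adaptive_avg_pool1d(event_feats.t().unsqueeze(0), num_tokens)  # (1, D, num_tokens)
    return pooled.squeeze(0).t()


def build_event_inputs(frame_feats: torch.Tensor):
    """For each event, prepend the compressed memory of the previous event to
    the current event's features before they would be passed to the LLM."""
    events = adaptive_event_segmentation(frame_feats)
    inputs, prev_memory = [], None
    for event in events:
        if prev_memory is None:
            inputs.append(event)
        else:
            inputs.append(torch.cat([prev_memory, event], dim=0))
        prev_memory = compress_event(event)
    return inputs


if __name__ == "__main__":
    feats = torch.randn(256, 768)  # e.g. 256 frames of 768-dimensional visual features
    per_event_inputs = build_event_inputs(feats)
    print([tuple(x.shape) for x in per_event_inputs])

Running this on 256 random 768-dimensional frame features prints the per-event input shapes, with every event after the first carrying a few extra memory tokens from its predecessor, which is the intuition behind the inter-event dependency modeling described in the abstract.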