PiTe: Pixel-Temporal Alignment for Large Video-Language Model
Format | Journal Article |
Language | English |
Published | 11.09.2024 |
Summary: | Fueled by the Large Language Models (LLMs) wave, Large Visual-Language Models
(LVLMs) have emerged as a pivotal advancement, bridging the gap between image
and text. However, video data remains challenging for LVLMs because of the
complex relationship between language and spatial-temporal data structures.
Recent Large Video-Language Models (LVidLMs) align features of static visual
data such as images into the latent space of language features through general
multi-modal tasks, in order to leverage the abilities of LLMs sufficiently. In
this paper, we explore a fine-grained alignment approach via object
trajectories that connects the modalities across both spatial and temporal
dimensions simultaneously. Accordingly, we propose a novel LVidLM with
trajectory-guided Pixel-Temporal Alignment, dubbed PiTe, that exhibits
promising applicable model properties. To achieve fine-grained video-language
alignment, we curate a multi-modal pre-training dataset, PiTe-143k, which
provides pixel-level moving trajectories for all individual objects that
appear and are mentioned in both the video and the caption, produced by our
automatic annotation pipeline. Meanwhile, PiTe demonstrates strong capabilities
on a myriad of video-related multi-modal tasks, beating the state-of-the-art
methods by a large margin. |
DOI: | 10.48550/arxiv.2409.07239 |
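The abstract does not specify how the pixel-level trajectory supervision enters training, so the following is only a minimal PyTorch sketch of what a trajectory-guided alignment objective could look like: a small head (the name TrajectoryHead is hypothetical, as is traj_alignment_loss) regresses each mentioned object's per-frame position from the video features and the object's language embedding, and an L1 loss against PiTe-143k-style trajectory annotations supplies a fine-grained spatial-temporal alignment signal. This is an illustrative assumption, not the paper's actual implementation.

```python
# Hypothetical sketch of a trajectory-guided alignment objective (not the paper's code).
import torch
import torch.nn as nn


class TrajectoryHead(nn.Module):
    """Predicts an object's normalized (x, y) position in every frame from
    per-frame visual features and the language embedding of that object."""

    def __init__(self, vis_dim: int = 768, txt_dim: int = 768, hidden: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vis_dim + txt_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, 2),  # (x, y) per frame
        )

    def forward(self, frame_feats: torch.Tensor, obj_embed: torch.Tensor) -> torch.Tensor:
        # frame_feats: (B, T, vis_dim) per-frame visual features
        # obj_embed:   (B, txt_dim)    embedding of the object mention in the caption
        T = frame_feats.size(1)
        obj = obj_embed.unsqueeze(1).expand(-1, T, -1)  # broadcast the text embedding over time
        return self.mlp(torch.cat([frame_feats, obj], dim=-1))  # (B, T, 2)


def traj_alignment_loss(pred_xy: torch.Tensor,
                        gt_xy: torch.Tensor,
                        visible: torch.Tensor) -> torch.Tensor:
    """L1 regression to the annotated trajectory, ignoring frames where the
    object is not visible (visible mask is 0)."""
    err = (pred_xy - gt_xy).abs().sum(-1)  # (B, T)
    return (err * visible).sum() / visible.sum().clamp(min=1)


if __name__ == "__main__":
    B, T = 2, 8
    head = TrajectoryHead()
    frame_feats = torch.randn(B, T, 768)  # stand-in for video-encoder output
    obj_embed = torch.randn(B, 768)       # stand-in for the object's text embedding
    gt_xy = torch.rand(B, T, 2)           # normalized ground-truth trajectory
    visible = torch.ones(B, T)            # per-frame visibility mask
    loss = traj_alignment_loss(head(frame_feats, obj_embed), gt_xy, visible)
    print(loss.item())
```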