Spatial-Temporal Knowledge-Embedded Transformer for Video Scene Graph Generation
Main Authors | |
Format | Journal Article |
Language | English |
Published | 22.09.2023 |
Subjects | |
Online Access | Get full text |
Summary: | Video scene graph generation (VidSGG) aims to identify objects in visual scenes and infer their relationships for a given video. It requires not only a comprehensive understanding of each object scattered across the whole scene but also a deep dive into their temporal motions and interactions. Inherently, object pairs and their relationships enjoy spatial co-occurrence correlations within each image and temporal consistency/transition correlations across different images, which can serve as prior knowledge to facilitate VidSGG model learning and inference. In this work, we propose a spatial-temporal knowledge-embedded transformer (STKET) that incorporates the prior spatial-temporal knowledge into the multi-head cross-attention mechanism to learn more representative relationship representations. Specifically, we first learn spatial co-occurrence and temporal transition correlations in a statistical manner. Then, we design spatial and temporal knowledge-embedded layers that introduce the multi-head cross-attention mechanism to fully explore the interaction between visual representations and the knowledge, generating spatially and temporally embedded representations, respectively. Finally, we aggregate these representations for each subject-object pair to predict the final semantic labels and their relationships. Extensive experiments show that STKET outperforms current competing algorithms by a large margin, e.g., improving mR@50 by 8.1%, 4.7%, and 2.1% under different settings. |
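As a rough sketch of how such a knowledge-embedded layer could look (an illustration of the idea in the summary above, not the authors' released implementation), the PyTorch snippet below estimates a spatial co-occurrence prior from annotated (subject, predicate, object) triplets and then lets subject-object pair features cross-attend to learnable predicate embeddings, with the log-prior added as an attention bias. The names (`spatial_cooccurrence_prior`, `KnowledgeEmbeddedCrossAttention`), the dimensions, and the specific way the prior is fused into attention are assumptions.

```python
import torch
import torch.nn as nn


def spatial_cooccurrence_prior(triplets, num_obj_classes, num_predicates):
    """Estimate P(predicate | subject class, object class) by counting
    annotated (subject, predicate, object) triplets and normalizing rows."""
    counts = torch.zeros(num_obj_classes, num_obj_classes, num_predicates)
    for subj, pred, obj in triplets:
        counts[subj, obj, pred] += 1
    return counts / counts.sum(dim=-1, keepdim=True).clamp_min(1.0)


class KnowledgeEmbeddedCrossAttention(nn.Module):
    """One hypothetical knowledge-embedded layer: visual features of
    subject-object pairs act as queries that cross-attend to learnable
    predicate embeddings, and the statistical prior enters as an additive
    attention bias. The fusion scheme is an assumption for illustration."""

    def __init__(self, dim=256, num_heads=8, num_predicates=26):
        super().__init__()
        self.num_heads = num_heads
        self.predicate_embed = nn.Embedding(num_predicates, dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, pair_feats, pair_prior):
        # pair_feats: (B, N, dim)  visual features of N subject-object pairs
        # pair_prior: (B, N, P)    prior predicate distribution per pair,
        #                          looked up from the co-occurrence table
        B = pair_feats.size(0)
        knowledge = self.predicate_embed.weight.unsqueeze(0).repeat(B, 1, 1)
        # Log-prior bias nudges each pair toward predicates that are
        # statistically plausible for its subject/object categories.
        bias = torch.log(pair_prior.clamp_min(1e-6))
        bias = bias.repeat_interleave(self.num_heads, dim=0)  # (B*H, N, P)
        attended, _ = self.cross_attn(
            pair_feats, knowledge, knowledge, attn_mask=bias
        )
        x = self.norm1(pair_feats + attended)
        return self.norm2(x + self.ffn(x))


if __name__ == "__main__":
    # Toy example: 3 object classes, 5 predicates, a handful of triplets.
    prior = spatial_cooccurrence_prior([(0, 1, 2), (0, 1, 2), (0, 3, 1)], 3, 5)
    layer = KnowledgeEmbeddedCrossAttention(dim=32, num_heads=4, num_predicates=5)
    feats = torch.randn(2, 4, 32)          # 2 clips, 4 candidate pairs each
    subj = torch.randint(0, 3, (2, 4))     # subject class index per pair
    obj = torch.randint(0, 3, (2, 4))      # object class index per pair
    out = layer(feats, prior[subj, obj])   # -> (2, 4, 32)
    print(out.shape)
```

A temporal knowledge-embedded layer would plausibly follow the same pattern, with the prior taken from predicate transition statistics between adjacent frames rather than per-frame co-occurrence counts.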
DOI: | 10.48550/arxiv.2309.13237 |