Video Visual Relation Detection with Contextual Knowledge Embedding

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, Vol. 35, No. 12, pp. 1-14
Main Authors: Cao, Qianwen; Huang, Heyan
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2023
ISSN: 1041-4347, 1558-2191
DOI: 10.1109/TKDE.2023.3270328

Summary: Video visual relation detection (VidVRD) aims at abstracting structured relations in the form of ⟨subject, predicate, object⟩ triplets from videos. The triplet formulation makes the search space extremely large and the distribution unbalanced. Existing works usually predict the relationships from visual, spatial, and semantic cues. Among them, semantic cues are responsible for exploring the semantic connections between objects, which is crucial for transferring knowledge across relations. However, most of these works extract semantic cues by simply mapping object labels to classification features, which ignores the contextual surroundings and results in poor performance on low-frequency relations. To alleviate these issues, we propose a novel network, termed Contextual Knowledge Embedded Relation Network (CKERN), to facilitate VidVRD by establishing contextual knowledge embeddings for detected object pairs in relations from two aspects: commonsense attributes and prior linguistic dependencies. Specifically, we take the pair as a query to extract relational facts from a commonsense knowledge base, then encode them to explicitly construct the semantic surroundings of relations. In addition, the statistics of object pairs with different predicates, distilled from large-scale visual relations, are taken into account to represent the linguistic regularity of relations. Extensive experimental results on benchmark datasets demonstrate the effectiveness and robustness of our proposed model.
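To make the two contextual cues named in the summary concrete, the following is a minimal, hypothetical Python sketch, not the authors' implementation: the toy fact table COMMONSENSE_KB, the training triplets, and the helper names (retrieve_facts, encode_facts, predicate_prior) are assumptions introduced only for illustration of (1) retrieving and encoding commonsense facts for an object pair and (2) estimating predicate statistics as a linguistic prior.

```python
# Hypothetical sketch of the two contextual cues described in the abstract.
# All data and names below are illustrative assumptions, not the CKERN code.
from collections import Counter, defaultdict
import numpy as np

# --- 1. Commonsense attributes: facts retrieved for an object pair ---------
# Toy stand-in for a commonsense knowledge base of (head, relation, tail)
# facts; the paper queries an external knowledge base instead.
COMMONSENSE_KB = [
    ("person", "CapableOf", "walk"),
    ("person", "CapableOf", "ride"),
    ("dog",    "IsA",       "animal"),
    ("dog",    "CapableOf", "run"),
]

def retrieve_facts(subject, obj):
    """Return KB facts whose head matches either object of the pair."""
    return [f for f in COMMONSENSE_KB if f[0] in (subject, obj)]

def encode_facts(facts, dim=16, rng=np.random.default_rng(0)):
    """Encode retrieved facts as the mean of (toy) token embeddings."""
    vocab = {}
    def embed(token):
        if token not in vocab:
            vocab[token] = rng.normal(size=dim)
        return vocab[token]
    vectors = [np.mean([embed(t) for t in fact], axis=0) for fact in facts]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

# --- 2. Prior linguistic dependencies: predicate statistics per pair -------
TRAIN_TRIPLETS = [
    ("person", "ride", "dog"),
    ("person", "walk", "dog"),
    ("person", "ride", "horse"),
    ("person", "ride", "dog"),
]

def predicate_prior(triplets):
    """Estimate P(predicate | subject, object) from training triplets."""
    counts = defaultdict(Counter)
    for s, p, o in triplets:
        counts[(s, o)][p] += 1
    return {pair: {p: n / sum(c.values()) for p, n in c.items()}
            for pair, c in counts.items()}

if __name__ == "__main__":
    pair = ("person", "dog")
    facts = retrieve_facts(*pair)
    print("retrieved facts:", facts)
    print("context embedding shape:", encode_facts(facts).shape)
    print("predicate prior for", pair, "->", predicate_prior(TRAIN_TRIPLETS)[pair])
```

In this sketch the fact embedding plays the role of the "semantic surroundings" of the pair, while the per-pair predicate distribution stands in for the linguistic regularity distilled from large-scale visual relations; in the paper both signals feed the relation prediction network rather than being inspected directly.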