Context-aware network with foreground recalibration for grounding natural language in video


Bibliographic Details
Published in: Neural Computing & Applications, Vol. 33, No. 16, pp. 10485–10502
Main Authors: Chen, Cheng; Gu, Xiaodong
Format: Journal Article
Language: English
Published: London: Springer London, 01.08.2021 (Springer Nature B.V.)

Summary: Grounding natural language in video aims at retrieving the moment in a long, untrimmed video that matches a referring natural language query. It is a challenging problem because of the dominating influence of the noisy background in untrimmed video and the complex temporal relationships introduced by the query. Existing methods treat different candidate segments separately in a matching-and-aligning manner and thus neglect that different target segments require different levels of context information. In this paper, we present the semantic modulation residual module, a novel single-shot feed-forward residual network that explicitly integrates features at multiple temporal scales and, guided by the query's semantic information, introduces less noise into the final moment representation. To establish more fine-grained interactions between different moments, a global interaction module is embedded in the network. Moreover, the data imbalance caused by the sparsely annotated moments weakens the binary cross-entropy criterion, so we design a foreground recalibration mechanism that enhances intra-class consistency and highlights the positive moments. We evaluate our method on three benchmark datasets, i.e., TACoS, Charades-STA, and ActivityNet Captions, achieving state-of-the-art performance without any post-processing. In particular, we reach 32.17%, 45.11%, and 43.76% under the Rank@1, IoU@0.5 metric on TACoS, Charades-STA, and ActivityNet Captions, respectively. Furthermore, ablation studies show the effectiveness of the individual components of the proposed method. We hope the proposed method can serve as a strong and simple alternative for fine-grained video retrieval.
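
To make the abstract's description more concrete, the following is a minimal, illustrative sketch (in PyTorch) of two of the ideas it mentions: a residual block whose moment features are modulated by query semantics, and a recalibrated binary cross-entropy that up-weights the sparse positive (foreground) moments. The class and function names, tensor shapes, gating form, and weighting formula are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only -- not the paper's code. Shapes, module names,
# and the recalibration weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticModulationResidual(nn.Module):
    """Residual block whose channel gates are predicted from the query,
    so query semantics decide which temporal features pass through."""

    def __init__(self, dim: int, query_dim: int):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.gate = nn.Linear(query_dim, dim)  # query -> per-channel gate

    def forward(self, moments: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # moments: (B, dim, T) candidate-moment features; query: (B, query_dim)
        gate = torch.sigmoid(self.gate(query)).unsqueeze(-1)  # (B, dim, 1)
        modulated = self.conv(moments) * gate                 # suppress noisy channels
        return moments + modulated                            # residual connection


def recalibrated_bce(scores: torch.Tensor, labels: torch.Tensor,
                     pos_weight: float = 10.0) -> torch.Tensor:
    """Binary cross-entropy with foreground moments up-weighted to counter
    the sparsity of annotated moments; the weight value is illustrative."""
    weight = torch.where(labels > 0.5,
                         torch.full_like(labels, pos_weight),
                         torch.ones_like(labels))
    return F.binary_cross_entropy_with_logits(scores, labels, weight=weight)


if __name__ == "__main__":
    block = SemanticModulationResidual(dim=256, query_dim=300)
    moments = torch.randn(2, 256, 64)                 # 64 candidate moments per video
    query = torch.randn(2, 300)                       # pooled query embedding
    scores = block(moments, query).mean(dim=1)        # (2, 64) matching scores
    labels = torch.zeros(2, 64)
    labels[:, 30] = 1.0                               # sparse foreground annotations
    print(recalibrated_bce(scores, labels).item())
```

The sketch uses a simple sigmoid gate and a fixed positive weight; the paper's actual semantic modulation, global interaction module, and recalibration mechanism may differ substantially.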
ISSN: 0941-0643
eISSN: 1433-3058
DOI: 10.1007/s00521-021-05807-z