Full-duplex strategy for video object segmentation

Bibliographic Details
Published in: Computational Visual Media (Beijing), Vol. 9, No. 1, pp. 155-175
Main Authors: Ji, Ge-Peng; Fan, Deng-Ping; Fu, Keren; Wu, Zhe; Shen, Jianbing; Shao, Ling
Format: Journal Article
Language: English
Published: Beijing: Tsinghua University Press, 01.03.2023 (Springer Nature B.V.; SpringerOpen)
Summary: Previous video object segmentation approaches mainly focus on simplex solutions linking appearance and motion, limiting effective feature collaboration between these two cues. In this work, we study a novel and efficient full-duplex strategy network (FSNet) to address this issue, by considering a better mutual restraint scheme linking motion and appearance, allowing cross-modal features to be fully exploited in the fusion and decoding stages. Specifically, we introduce a relational cross-attention module (RCAM) to achieve bidirectional message propagation across embedding sub-spaces. To improve the model's robustness and update inconsistent features from the spatiotemporal embeddings, we adopt a bidirectional purification module after the RCAM. Extensive experiments on five popular benchmarks show that our FSNet is robust to various challenging scenarios (e.g., motion blur and occlusion), and compares well to leading methods for both video object segmentation and video salient object detection. The project is publicly available at https://github.com/GewelsJI/FSNet.
ISSN: 2096-0433; 2096-0662
DOI: 10.1007/s41095-021-0262-4
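
The abstract describes a relational cross-attention module (RCAM) that exchanges messages in both directions between appearance and motion embeddings. The sketch below is a hypothetical, minimal illustration of such bidirectional (full-duplex) cross-modal gating in PyTorch; it is not the authors' RCAM, whose actual design is in the repository linked above. The class name, gating layout, and tensor shapes are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' code) of bidirectional cross-modal gating
# between appearance and motion feature maps, each assumed to be (B, C, H, W).
# See https://github.com/GewelsJI/FSNet for the official implementation.
import torch
import torch.nn as nn


class BidirectionalCrossAttention(nn.Module):
    """Exchanges channel-wise messages between two modality-specific streams."""

    def __init__(self, channels: int):
        super().__init__()
        # Each gate produces a per-channel weight vector from one modality,
        # which is then applied to the other (hypothetical design).
        self.gate_from_appearance = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.gate_from_motion = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, appearance: torch.Tensor, motion: torch.Tensor):
        # Each stream is re-weighted by a gate computed from the other stream,
        # so information flows in both directions ("full-duplex").
        appearance_out = appearance * self.gate_from_motion(motion)
        motion_out = motion * self.gate_from_appearance(appearance)
        return appearance_out, motion_out


if __name__ == "__main__":
    app = torch.randn(2, 64, 32, 32)   # appearance (RGB) features
    mot = torch.randn(2, 64, 32, 32)   # motion (optical-flow) features
    fused_app, fused_mot = BidirectionalCrossAttention(64)(app, mot)
    print(fused_app.shape, fused_mot.shape)  # torch.Size([2, 64, 32, 32]) each
```

The design choice illustrated here is simply that neither modality dominates: appearance features are modulated by motion-derived weights and vice versa, which is the "mutual restraint" idea stated in the abstract. The paper's actual RCAM and the subsequent bidirectional purification module differ in detail and should be consulted in the linked code.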