Learning to Segment Video Object With Accurate Boundaries

Bibliographic Details
Published in: IEEE Transactions on Multimedia, Vol. 23, pp. 3112–3123
Main Authors: Cheng, Jingchun; Yuan, Yuhui; Li, Yali; Wang, Jingdong; Wang, Shengjin
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021

Summary: Video object segmentation has attracted considerable research interest in recent years. Top-performing video object segmentation methods mainly rely on fully convolutional neural networks trained specifically for high-quality mask prediction, which results in imprecise boundary details. This paper tackles the problem of predicting segmentation masks in videos that are both mask-accurate and boundary-precise. To solve this problem, we propose a simple and efficient network structure: the Mask-boundAry-Consistent Network (MAC-Net). The MAC-Net is an end-to-end fully convolutional network in which the mask and its boundaries are jointly optimized during training, enabling it to predict masks along with accurate boundaries. An inner-net boundary-computing module is incorporated in the MAC-Net to spontaneously produce mask-consistent boundaries. We analyze the influence of parameter settings and network constructions of the MAC-Net, and compare it with state-of-the-art algorithms on three widely adopted datasets. Experimental results show that the MAC-Net achieves state-of-the-art performance, demonstrating the effectiveness of its mask-boundary-consistent network structure. We also show that the boundary module in the MAC-Net is highly compatible and can easily be adapted to other segmentation-related techniques.
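
For readers who want a concrete picture of the joint mask-boundary idea described in the abstract, the sketch below shows one plausible way to derive a boundary map directly from a predicted mask and supervise mask and boundary jointly. It is an illustrative stand-in under stated assumptions, not the paper's actual module: the morphological-gradient operator, the PyTorch framing, and the names BoundaryFromMask, joint_loss, and boundary_weight are all hypothetical choices, not taken from the source.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryFromMask(nn.Module):
    """Derive a soft boundary map from a mask via a morphological
    gradient (dilation minus erosion, both done with max-pooling).
    Hypothetical stand-in for an inner-net boundary-computing module;
    the paper's exact operator may differ."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        self.pad = kernel_size // 2

    def forward(self, mask_prob):
        # mask_prob: (N, 1, H, W), values in [0, 1]
        dilated = F.max_pool2d(mask_prob, self.kernel_size,
                               stride=1, padding=self.pad)
        eroded = -F.max_pool2d(-mask_prob, self.kernel_size,
                               stride=1, padding=self.pad)
        # High response only where the mask value changes, i.e. at boundaries.
        return dilated - eroded

def joint_loss(mask_logits, mask_gt, boundary_module, boundary_weight=0.5):
    """Supervise the mask and the boundary derived from it, so the two
    predictions stay consistent by construction (loss weighting is an
    assumption, not a value from the paper)."""
    mask_loss = F.binary_cross_entropy_with_logits(mask_logits, mask_gt)
    pred_boundary = boundary_module(torch.sigmoid(mask_logits))
    gt_boundary = boundary_module(mask_gt)
    boundary_loss = F.binary_cross_entropy(pred_boundary.clamp(0, 1),
                                           gt_boundary.clamp(0, 1))
    return mask_loss + boundary_weight * boundary_loss

# Toy usage with random tensors in place of real network output and labels.
boundary_module = BoundaryFromMask()
logits = torch.randn(2, 1, 64, 64, requires_grad=True)
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = joint_loss(logits, gt, boundary_module)
loss.backward()
```

Because the boundary map is computed from the mask itself rather than predicted by a separate head, gradients from the boundary term flow back into the mask prediction, which is one reading of how a boundary module of this kind could also be bolted onto other segmentation networks.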
ISSN: 1520-9210; 1941-0077
DOI: 10.1109/TMM.2020.3020698