Video segmentation based on patch matching and enhanced Onecut

Bibliographic Details
Published in: 2017 2nd International Conference on Image, Vision and Computing (ICIVC), pp. 346-350
Main Authors: Yingchun Yang, Yuchen Peng, Shoudong Han
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2017
Summary: Video segmentation has been widely applied in many fields, such as motion identification, target tracking, video retrieval and video editing. We propose a video segmentation approach that combines the color feature, shape feature and motion information of the target. Firstly, we introduce a small amount of user interaction on the key frame to obtain an accurate contour, and then initialize the local classifiers. Secondly, we use patch-based sparse matching, referred to as patch matching in the following, to propagate the contour of the previous frame to the current frame, so that an initial contour of the target is estimated; the position parameters are updated at the same time. Finally, we compute the foreground and background probability distributions of the current frame from the global probability models and the local classifiers, and construct the enhanced Onecut model to obtain the segmentation result. Compared with state-of-the-art video segmentation methods, the proposed approach performs outstandingly on the DAVIS dataset.
DOI:10.1109/ICIVC.2017.7984575
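
The following is a minimal, self-contained sketch of the patch-matching idea mentioned in the summary: small patches centred on the previous frame's contour points are matched, by exhaustive local search with a sum-of-squared-differences (SSD) cost, to positions in the current frame, giving a rough initial contour. The SSD cost, patch size, search radius and NumPy frame representation are assumptions made here for illustration; this is not the authors' implementation.

import numpy as np


def propagate_contour(prev_frame, curr_frame, contour_pts,
                      patch_radius=4, search_radius=10):
    # Estimate contour positions in curr_frame from contour_pts of prev_frame
    # by matching a small patch around each point within a local search window.
    h, w = prev_frame.shape[:2]
    r, s = patch_radius, search_radius
    new_pts = []
    for (y, x) in contour_pts:
        # Skip points whose patch would fall outside the image.
        if not (r <= y < h - r and r <= x < w - r):
            continue
        ref = prev_frame[y - r:y + r + 1, x - r:x + r + 1]
        best_cost, best_pos = np.inf, (y, x)
        for dy in range(-s, s + 1):
            for dx in range(-s, s + 1):
                yy, xx = y + dy, x + dx
                if not (r <= yy < h - r and r <= xx < w - r):
                    continue
                cand = curr_frame[yy - r:yy + r + 1, xx - r:xx + r + 1]
                cost = np.sum((cand - ref) ** 2)   # SSD patch distance
                if cost < best_cost:
                    best_cost, best_pos = cost, (yy, xx)
        new_pts.append(best_pos)
    return new_pts


if __name__ == "__main__":
    # Toy example: the second frame is the first translated by (2, 3) pixels,
    # so each contour point should be recovered 2 rows down, 3 columns right.
    rng = np.random.default_rng(0)
    prev = rng.random((64, 64, 3))
    curr = np.roll(prev, shift=(2, 3), axis=(0, 1))
    contour = [(20, c) for c in range(20, 40)]   # a horizontal line of points
    moved = propagate_contour(prev, curr, contour)
    print(moved[:5])   # -> [(22, 23), (22, 24), (22, 25), (22, 26), (22, 27)]

In the toy example the second frame is a pure translation of the first, so every contour point is recovered at its shifted position; on real video the matched points only approximate the new contour, which, as described in the summary, is then refined through the global probability models, local classifiers and the enhanced Onecut segmentation.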