Memory-Efficient Continual Learning Object Segmentation for Long Video
Format: Journal Article
Language: English
Published: 26.09.2023
Summary: Recent state-of-the-art semi-supervised Video Object Segmentation (VOS) methods have shown significant improvements in target object segmentation accuracy when information from preceding frames is used in segmenting the current frame. In particular, such memory-based approaches can help a model to more effectively handle appearance changes (representation drift) or occlusions. Ideally, for maximum performance, Online VOS methods would need all or most of the preceding frames (or their extracted information) to be stored in memory and be used for online learning in later frames. Such a solution is not feasible for long videos, as the required memory size grows without bound, and such methods can fail when memory is limited and a target object experiences repeated representation drifts throughout a video. We propose two novel techniques that reduce the memory requirement of Online VOS methods while improving modeling accuracy and generalization on long videos. Motivated by the success of continual learning techniques in preserving previously learned knowledge, we propose Gated-Regularizer Continual Learning (GRCL), which improves the performance of any Online VOS method under limited memory, and Reconstruction-based Memory Selection Continual Learning (RMSCL), which enables Online VOS methods to efficiently benefit from the information stored in memory. We also analyze the performance of a hybrid combination of the two proposed methods. Experimental results show that the proposed methods improve the performance of Online VOS models by more than 8%, with improved robustness on long-video datasets while maintaining comparable performance on short-video datasets such as DAVIS16, DAVIS17, and YouTube-VOS18.
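The gated-regularizer idea builds on regularization-based continual learning, where parameters deemed important for earlier frames are penalized for drifting. As a rough illustration only (the paper's exact GRCL formulation is not given in this record), here is a minimal EWC-style sketch in which per-parameter importance scores are thresholded into a binary gate, so the method only needs to store a bitmask plus the anchor weights; the function name, `tau` threshold, and use of squared importance are all assumptions:

```python
import numpy as np

def gated_regularizer_loss(params, anchor_params, importance, tau=0.5):
    """Quadratic drift penalty applied only where a binary gate is open.

    Hypothetical sketch, not the paper's GRCL: importance scores
    (e.g. accumulated squared gradients) are thresholded into a 0/1 gate,
    so memory cost is one bit per parameter plus the anchor weights.
    """
    gate = (importance > tau).astype(float)  # 1 = protect this parameter
    return float(np.sum(gate * (params - anchor_params) ** 2))
```

During online updates, such a penalty would be added to the segmentation loss so that gated parameters resist representation drift while ungated ones stay free to adapt to the current frame.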
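Reconstruction-based memory selection can be pictured as keeping only the stored entries that best explain the current frame. The following sketch is a generic illustration under assumed details (the abstract does not specify RMSCL's mechanism; `select_memory`, the least-squares criterion, and `k` are all hypothetical): memory features form the columns of a matrix, the query feature is reconstructed by least squares, and the entries with the largest absolute weights are retained.

```python
import numpy as np

def select_memory(memory_feats, query_feat, k=2):
    """Pick the k memory columns that contribute most to reconstructing
    the query feature.

    Hypothetical sketch of reconstruction-based selection: solve
    min_w ||M w - q||_2, then keep the columns of M with the largest
    absolute weights, bounding memory size at k entries.
    """
    w, *_ = np.linalg.lstsq(memory_feats, query_feat, rcond=None)
    keep = np.argsort(-np.abs(w))[:k]
    return np.sort(keep)
```

Pruning memory this way keeps the working set at a fixed size, which is the property the abstract highlights for long videos where an unbounded memory is infeasible.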
DOI: 10.48550/arxiv.2309.15274