Deep background subtraction with scene-specific convolutional neural networks

Bibliographic Details
Published in: International Conference on Systems, Signals, and Image Processing (Online), pp. 1 - 4
Main Authors: Braham, Marc; Van Droogenbroeck, Marc
Format: Conference Proceeding; Journal Article
Language: English
Published: IEEE, 01.05.2016

More Information
Summary: Background subtraction is usually based on low-level or hand-crafted features such as raw color components, gradients, or local binary patterns. As an improvement, we present a background subtraction algorithm based on spatial features learned with convolutional neural networks (ConvNets). Our algorithm uses a background model reduced to a single background image and a scene-specific training dataset to feed ConvNets that prove able to learn how to subtract the background from an input image patch. Experiments conducted on the 2014 ChangeDetection.net dataset show that our ConvNet-based algorithm at least reproduces the performance of state-of-the-art methods, and that it even outperforms them significantly when scene-specific knowledge is considered.
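To make the patch-wise idea in the summary concrete, the sketch below shows how a small ConvNet could classify the centre pixel of a patch as background or foreground from a pair of grayscale patches (one taken from the single background image, one from the input frame). It is an illustrative reconstruction in PyTorch, not the authors' published network: the 27x27 patch size, the layer widths, the class name PatchBGSNet, and the 0.5 decision threshold are all assumptions made for the example.

# Illustrative sketch (not the authors' exact architecture): a small ConvNet
# that takes a grayscale background patch and the corresponding input patch as
# two channels and predicts a foreground probability for the patch centre.
import torch
import torch.nn as nn

PATCH = 27  # assumed patch size

class PatchBGSNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 27 -> 23 -> 11
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 11 -> 7 -> 3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 3 * 3, 120), nn.ReLU(),
            nn.Linear(120, 1), nn.Sigmoid(),  # foreground probability of the patch centre
        )

    def forward(self, background_patch, input_patch):
        # Stack the background patch and the input patch as two channels.
        x = torch.cat([background_patch, input_patch], dim=1)  # (N, 2, 27, 27)
        return self.classifier(self.features(x))

# Usage: one patch pair -> probability that the centre pixel is foreground.
net = PatchBGSNet()
bg = torch.rand(1, 1, PATCH, PATCH)
im = torch.rand(1, 1, PATCH, PATCH)
prob = net(bg, im)                 # shape (1, 1)
is_foreground = (prob > 0.5).item()

In the scene-specific setting described in the summary, such a network would be trained on patch pairs extracted from the scene itself, using ground-truth labels for the patch centres, and then evaluated at every pixel of a frame to produce the segmentation mask.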
ISSN: 2157-8702
DOI: 10.1109/IWSSIP.2016.7502717