Deep neural network concepts for background subtraction: A systematic review and comparative evaluation
Published in | Neural Networks, Vol. 117, pp. 8–66 |
---|---|
Main Authors | Bouwmans, Thierry; Javed, Sajid; Sultana, Maryam; Jung, Soon Ki |
Format | Journal Article (Review) |
Language | English |
Published | United States: Elsevier Ltd, 01.09.2019 |
Subjects | Generative adversarial networks; Restricted Boltzmann machines; Convolutional neural networks; Background subtraction; Auto-encoder networks |
Online Access | https://hal.science/hal-02118618 (open access) |
ISSN | 0893-6080 |
EISSN | 1879-2782 |
DOI | 10.1016/j.neunet.2019.04.024 |
Abstract | Conventional neural networks have proven to be a powerful framework for background subtraction in video acquired by static cameras. Indeed, the well-known Self-Organizing Background Subtraction (SOBS) method and its neural-network-based variants were long the leading methods on the large-scale CDnet 2012 dataset. Convolutional neural networks, as used in deep learning, have recently been employed extensively for background initialization, foreground detection, and deep learned features. The top-ranked background subtraction methods on CDnet 2014 are based on deep neural networks and show a large performance improvement over conventional unsupervised approaches based on multi-feature or multi-cue strategies. Furthermore, since the seminal work of Braham and Van Droogenbroeck in 2016, a large number of studies on convolutional neural networks applied to background subtraction have been published, with a continual gain in performance. In this context, we provide the first review of deep neural network concepts in background subtraction, for novices and experts alike, in order to analyze this success and to provide further directions. To do so, we first survey the background initialization and background subtraction methods based on deep neural network concepts, as well as deep learned features. We then discuss the adequacy of deep neural networks for the task of background subtraction. Finally, experimental results are presented on the CDnet 2014 dataset. |
---|---|
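For readers new to the topic, the sketch below illustrates (in PyTorch) the basic supervised setup the abstract alludes to: a small convolutional network takes the current frame stacked with a background image and predicts a per-pixel foreground mask, trained against CDnet-style ground-truth masks. This is an editorial illustration only, not code from the article or from any method it surveys; the class name TinyFgCNN, the layer sizes, and the training loop are hypothetical choices.

```python
# Minimal sketch of supervised CNN-based background subtraction (illustrative only).
# Assumptions: RGB frames, a precomputed RGB background image, and binary
# ground-truth masks as in CDnet; none of these names come from the article.
import torch
import torch.nn as nn

class TinyFgCNN(nn.Module):
    """Tiny encoder-decoder: input is 6 channels (frame + background image),
    output is a 1-channel foreground logit map at the input resolution."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),  # foreground logits
        )

    def forward(self, frame, background):
        x = torch.cat([frame, background], dim=1)  # (N, 6, H, W)
        return self.decoder(self.encoder(x))       # (N, 1, H, W)

def train_step(model, optimizer, frame, background, gt_mask):
    """One supervised update against a binary ground-truth mask (1 = foreground)."""
    optimizer.zero_grad()
    logits = model(frame, background)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, gt_mask)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = TinyFgCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Random tensors stand in for a frame, its background model, and a label mask.
    frame = torch.rand(1, 3, 240, 320)
    background = torch.rand(1, 3, 240, 320)
    gt_mask = (torch.rand(1, 1, 240, 320) > 0.9).float()
    print("loss:", train_step(model, optimizer, frame, background, gt_mask))
    # At test time the mask is obtained by thresholding torch.sigmoid(logits), e.g. at 0.5.
```

The unsupervised baselines the abstract compares against (e.g., mixture-of-Gaussians or SOBS-style models) need no labeled masks; the label requirement of supervised deep models is one of the trade-offs the review examines.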
Author | Bouwmans, Thierry (Lab. MIA, University La Rochelle, France; tbouwman@univ-lr.fr); Javed, Sajid (Department of Computer Science, University of Warwick, UK); Sultana, Maryam (Department of Computer Science and Engineering, Kyungpook National University, Republic of Korea); Jung, Soon Ki (Department of Computer Science and Engineering, Kyungpook National University, Republic of Korea) |
License | Copyright © 2019 Elsevier Ltd. All rights reserved. Attribution - NonCommercial: http://creativecommons.org/licenses/by-nc |
ORCID | 0000-0003-4018-8856 |
PMID | 31129491 |
A self-adjusting approach to change detection based on background word consensus. In – reference: Shahbaz, A., Hernandez, D., & Jo, K. (2017). Optimal color space based probabilistic foreground detector for video surveillance systems. In – volume: 23 start-page: 294 year: 2004 end-page: 302 ident: b3 article-title: Interactive digital photomontage publication-title: ACM Transactions on Graphics – reference: (pp. 440–445). – reference: Lim, K., Jang, W., & Kim, C. (2017). Background subtraction using encoder-decoder structured convolutional neural network. In – reference: Goodfellow, I., et al. Generative adversarial networks. In – reference: Abadi, M., et al. (2016). TensorFlow: Large-scale machine learning on heterogeneous distributed systems. In – reference: (pp. 585–593). – start-page: 57 year: 2008 end-page: 64 ident: b212 article-title: Neural model-based segmentation of image motion publication-title: KES 2008 – year: 2015 ident: b101 article-title: Flownet: Learning optical flow with convolutional networks – reference: . (pp. 772–781). – reference: (pp. 40–49). – volume: 2 start-page: 303 year: 1989 end-page: 314 ident: b79 article-title: Approximation by superpositions of a sigmoidal function publication-title: Mathematics of Control Signals and Systems – reference: (pp. 1–7). – reference: (pp. 2296–2300). – volume: 29 start-page: 2123 year: 2017 end-page: 2163 ident: b310 article-title: Deep restricted kernel machines using conjugate feature duality publication-title: Neural Computation – reference: Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In – reference: Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., & Girshick, R., et al. (2014). Caffe: Convolutional architecture for fast feature embedding. In – reference: Javed, S., Sobral, A., Bouwmans, T., & Jung, S. (2015). OR-PCA with dynamic feature selection for robust background subtraction. In – reference: (pp. 1–6). – reference: (pp. 524–533). – year: 2005 ident: b316 article-title: Foreground-background segmentation in video sequences using neural networks publication-title: Intelligent Systems: Neural Networks and Applications – volume: 107 start-page: 3 year: 2018 end-page: 11 ident: b91 article-title: Sigmoid-weighted linear units for neural network function approximation in reinforcement learning publication-title: Neural Networks – year: 2017 ident: b156 article-title: Image to- image translation with conditional adversarial networks – reference: Wang, Z., Zhang, L., & Bao, H. PNN based motion detection with adaptive learning rate. In – volume: 28 year: 2018 ident: b230 article-title: New trends on moving object detection in video images Captured by a moving Camera: A survey publication-title: Computer Science Review – year: 2017 ident: b289 article-title: Real-time embedded motion detection via neural response mixture modeling publication-title: Journal of Signal Processing Systems – reference: (pp. 718–725). – year: 2015 ident: b60 article-title: Keras – year: 2015 ident: b134 article-title: Global optimality in tensor factorization, deep learning, and beyond. – reference: Gil-Jimenez, P., Maldonado-Bascon, S., Gil-Pita, R., & Gomez-Moreno, H. (2003). Background pixel classification for motion detection in video image sequences. In – reference: He, K., Zhang, X., Ren, S., & Sun, J. Delving deep into rectifiers: Surpassing human-level performance on Imagenet classification. 
In – reference: Javed, S., Bouwmans, T., & Jung, S. (2015a). Combining ARF and OR-PCA background subtraction of noisy videos. In – volume: 30 start-page: 1004 year: 2012 end-page: 1015 ident: b333 article-title: Real-time robust background subtraction under rapidly changing illumination conditions publication-title: Image Vision and Computing – reference: Bautista, C., Dy, C., Manalac, M., Orbe, R., & Cordel, M. (2016). Convolutional neural network for vehicle detection in low resolution traffic videos. In – year: 2016 ident: b59 article-title: Interval-valued model level Fuzzy aggregation-based background subtraction publication-title: IEEE Transactions on Cybernetics – reference: Liao, J., Guo, G., Yan, Y., & Wang, H. (2018). Multiscale cascaded scene-specific convolutional neural networks for background subtraction. In – year: 2015 ident: b21 article-title: How far Can you get by combining change detection algorithms? – reference: Vacavant, A., Chateau, T., Wilhelm, A., & Lequievre, L. (2012). A benchmark dataset for foreground/background extraction. In – year: 2019 ident: b7 article-title: An improved video foreground extraction strategy using multi-view receptive field and EnDec CNN publication-title: IEEE Transactions on Industrial Informatics – volume: 110 start-page: 104 year: 2019 end-page: 115 ident: b399 article-title: ADA-Tucker: Compressing deep neural networks via adaptive dimension adjustment tucker decomposition, neural networks publication-title: Neural Networks – year: 2019 ident: b208 article-title: Tensor robust principal component analysis with a new tensor nuclear norm publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence – year: 2018 ident: b190 article-title: Background subtraction using the factored 3-way restricted Boltzmann machines – reference: (pp. 419–422). – year: 2018 ident: b262 article-title: Attention-based few-shot person re-identification using meta learning – year: 2018 ident: b226 article-title: Self-organizing background subtraction using color and depth data publication-title: Multimedia Tools and Applications – year: 2015 ident: b260 article-title: Unsupervised representation learning with deep convolutional generative adversarial networks publication-title: Computer Science – reference: (pp. 234–241). – year: 2015 ident: b167 article-title: Robust background subtraction to global illumination changes via multiple features based OR-PCA with MRF publication-title: Journal of Electronic Imaging – year: 2018 ident: b283 article-title: Investigating the application of deep convolutional neural networks in semi-supervised video object segmentation – volume: 15 year: 2018 ident: b232 article-title: On the implicit bias of dropout publication-title: International Conference on Machine Learning, ICML 2018 – volume: 27 start-page: 773 year: 2006 end-page: 780 ident: b406 article-title: Efficient adaptive density estimation per image pixel for the task of background subtraction publication-title: Pattern Recognition Letters – year: 2018 ident: b383 article-title: Multiscale fully convolutional network for foreground object detection in infrared videos publication-title: IEEE Geoscience and Remote Sensing Letters – reference: Gregorio, M., & Giordano, M. (2015). Background modeling by weightless neural networks. 
In – year: 2012 ident: b250 article-title: Simultaneous video stabilization and moving object detection in turbulence publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI 2012 – year: 2017 ident: b331 article-title: Mathematics of deep learning – year: 2015 ident: b173 article-title: Motion detection: Unsolved issues and [potential] solutions – reference: (pp. 1–5). – reference: Wang, J., Bebis, G., & Miller, R. (2006). Robust video-based surveillance by integrating target detection with tracking. In – reference: Yi, H., Shiyu, S., Xiusheng, D., & Zhigang, C. (2016). A study on deep neural networks framework. In – year: 2018 ident: b269 article-title: Moving object detection through robust matrix completion augmented with objectness publication-title: IEEE Journal of Selected Topics in Signal Processing – reference: Davies, R., Mihaylova, L., Pavlidis, N., & Eckley, I. (2013). The effect of recovery algorithms on compressive sensing background subtraction. In – reference: (pp. 242–253). – reference: Choo, S., Seo, W., Jeong, D., & Cho, N. (2018a). Multi-scale recurrent encoder-decoder network for dense temporal classification. In – volume: 108 start-page: 296 year: 2018 end-page: 330 ident: b255 article-title: Optimal approximation of piecewise smooth functions using deep relu neural networks publication-title: Neural Networks – year: 2017 ident: b54 article-title: Pixel-wise deep sequence learning for moving object detection publication-title: IEEE Transactions on Circuits and Systems for Video Technology – reference: Varadarajan, S., Miller, P., & Zhou, H. (2013). Spatial mixture of Gaussians for dynamic background modelling. In – reference: (pp. 103–108). – reference: Javed, S., Bouwmans, T., & Jung, S. (2017). SBMI-LTD: Stationary background model initialization based on low-rank tensor decomposition. In – reference: Braham, M., & Droogenbroeck, M. V. (2016). Deep background subtraction with scene-specific convolutional neural networks. In – year: 2018 ident: b181 article-title: Learning in memristive neural network architectures using analog backpropagation circuits – volume: 31 start-page: 539 year: 2009 end-page: 555 ident: b347 article-title: Unsupervised activity perception in crowded and complicated scenes using hierarchical Bayesian models publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence – reference: (pp. 410–414). – year: 2019 ident: b370 article-title: FSNet: Compression of deep convolutional neural networks by filter summary – volume: 1 start-page: 137 year: 2018 end-page: 145 ident: b355 article-title: Fully memristive neural networks for pattern classification with unsupervised learning publication-title: Nature Electronics – year: 2018 ident: b14 article-title: Online illumination invariant moving object detection by generative neural network – volume: 12 start-page: 1398 year: 2011 end-page: 1412 ident: b98 article-title: Adaptive background modeling integrated with luminosity sensors and occlusion processing for reliable vehicle detection publication-title: IEEE Transactions on Intelligent Transportation Systems – reference: Le, D., & Pham, T. (2018). Encoder-decoder convolutional neural network for change detection. In – reference: Chacon-Muguia, M., Gonzalez-Duarte, S., & Vega, P. (2009). Simplified SOM-neural model for video segmentation of moving objects. In – reference: Kahng, M., Thorat, N., Chau, D., Viegas, F., & Wattenberg, M. (2019). 
GAN Lab: Understanding complex deep generative models using interactive visual experimentation. In – volume: 118 start-page: 14 year: 2019 end-page: 22 ident: b373 article-title: A review of convolutional-neural-network-based action recognition publication-title: Pattern Recognition Letters – volume: 122 start-page: 65 year: 2014 end-page: 73 ident: b220 article-title: The 3dSOBS+ algorithm for moving object detection publication-title: Computer Vision and Image Understanding, CVIU 2014 – reference: Tavakkoli, A., Nicolescu, M., & Bebis, G. (2006). Novelty detection approach for foreground region detection in videos with quasi-stationary backgrounds. In – reference: Guyon, C., Bouwmans, T., & Zahzah, E. (2012). Foreground detection based on low-rank and block-sparse matrix decomposition. In – reference: Cohen, N., Tamari, R., & Shashua, A. (2018). Boosting dilated convolutional networks with mixed tensor decompositions. In – reference: Li, D., Jiang, M., Fang, Y., Huang, Y., & Zhao, C. (2018). Deep video foreground target extraction with complex scenes. In – reference: Minematsu, T., Shimada, A., & Taniguchi, R. (2017). Analytics of deep neural network in change detection. In – year: 2017 ident: b185 article-title: A method based on motion detection for generating the background of a scene publication-title: Pattern Recognition Letters – year: 2018 ident: b90 article-title: A guide to convolution arithmetic for deep learning – volume: 27 start-page: 457 year: 2019 end-page: 468 ident: b350 article-title: Combining spectral and spatial features for deep learning based blind speaker separation publication-title: IEEE/ACM Transactions on Audio, Speech, and Language Processing – reference: Lin, H., Liu, T., & Chuang, J. (2002). A probabilistic SVM approach for background scene initialization, – year: 2014 ident: b221 article-title: Background model initialization for static Cameras publication-title: Handbook on background modeling and foreground detection for video surveillance – reference: Rezaei, B., & Ostadabbas, S. (2017). Background subtraction via fast robust matrix completion. In – reference: (pp. 1–9). – reference: Kim, J., Rivera, A., Kim, B., Roy, K., & Chae, O. (2017). Background modeling using adaptive properties of hybrid features. In – start-page: 1 year: 2019 end-page: 21 ident: b117 article-title: Dynamic background modeling using deep learning autoencoder network publication-title: Multimedia Tools and Applications – start-page: 14 year: 2014 end-page: 29 ident: b196 article-title: Robust object detection in severe imaging conditions using co-occurrence background model publication-title: International Journal of Optomechatronics – volume: 4 start-page: 251 year: 1991 end-page: 257 ident: b150 article-title: Approximation capabilities of multilayer feedforwardnetworks publication-title: Neural Networks – year: 2015 ident: b259 article-title: Unsupervised representation learning with deep convolutional generative adversarial networks – reference: (pp. 770–778). – reference: Javed, S., Bouwmans, T., & Jung, S. (2015b). Depth extended online RPCA with spatiotemporal constraints for robust background subtraction. In – year: 2017 ident: b193 article-title: Adaptive deep convolutional neural networks for scene-specific object detection publication-title: IEEE Transactions on Circuits and Systems for Video Technology – reference: Gregorio, M., & Giordano, M. (2017). CwisarDH+: Background detection in RGBD videos by learning of weightless neural networks. In – reference: (pp. 
5133–5141). – reference: (pp. 675–678). – year: 2018 ident: b393 article-title: Object detection with deep learning: A review – reference: Silva, C., Bouwmans, T., & Frelicot, C. (2015). An eXtended center-symmetric local binary pattern for background modeling and subtraction in videos. In – year: 2018 ident: b278 article-title: Illumination-aware multi-task GANs for foreground segmentation publication-title: IEEE Access – reference: Xu, P., Ye, M., Li, X., Liu, Q., Yang, Y., & Ding, J. (2014). Dynamic background learning through deep auto-encoder networks. In – volume: 28 start-page: 013038 year: 2019 ident: b46 article-title: Deep learning-based scene-awareness approach for intelligent change detection in videos publication-title: Journal of Electronic Imaging – year: 2018 ident: b201 article-title: Foreground segmentation using a triplet convolutional neural network for multiscale feature encoding – reference: Maddalena, L., & Petrosino, A. (2009a). Multivalued background/foreground separation for moving object detection. In – volume: 19 start-page: 780 year: 1997 end-page: 785 ident: b359 article-title: Pfinder: Real-time tracking of the human body publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence – reference: Deng, J., Dong, W., Socher, R., Li, L., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In – reference: Xu, L., Li, Y., Wang, Y., & Chen, E. (2015). Temporally adaptive restricted Boltzmann machine for background modeling. In – reference: Choromanska, A., Henaff, M., Mathieu, M., Arous, G., & LeCun, Y. (2015). The loss surfaces of multilayer networks. In – year: 2010 ident: b86 article-title: Adaptive learning of multi-subspace for foreground detection under illumination changes publication-title: Computer Vision and Image Understanding – reference: Farcas, D., & Bouwmans, T. (2010). Background modeling via a supervised subspace learning. In – volume: 23 start-page: 1 year: 2017 end-page: 71 ident: b29 article-title: Decomposition into low-rank plus additive matrices for background/foreground separation: A review for a comparative evaluation with a large-scale dataset publication-title: Computer Science Review – year: 2018 ident: b397 article-title: Background subtraction algorithm based on Bayesian generative adversarial networks publication-title: Acta Automatica Sinica – reference: Yang, J., Yang, J., Yang, X., & Yue, H. (2016). Background recovery from video sequences via online motion-assisted RPCA. In – year: 2014 ident: b100 article-title: Background subtraction model based on color and depth cues publication-title: Machine Vision and Applications – year: 2019 ident: b362 article-title: Deep learning-based methods for person re-identification: A comprehensive review publication-title: Neurocomputing – reference: (pp. 1519–1522). – reference: (pp. 2724–2730). – year: 1962 ident: b357 article-title: Generalization and information storage in networks of ADALINE publication-title: Self Organizing Systems – reference: Shafiee, M., Siva, P., Fieguth, P., & Wong, A. (2016). Embedded motion detection via neural response mixture background modeling. 
In – volume: 122 start-page: 74 year: 2014 end-page: 83 ident: b302 article-title: A texton-based kernel density estimation approach for background modeling under extreme conditions publication-title: Computer Vision and Image Understanding, CVIU 2014 – volume: 234 start-page: 11 year: 2017 end-page: 26 ident: b205 article-title: A survey of deep neural network architectures and their applications publication-title: Neurocomputing – year: 2017 ident: b27 article-title: Scene background initialization: a taxonomy publication-title: Pattern Recognition Letters – volume: 26 start-page: 5840 year: 2017 end-page: 5854 ident: b166 article-title: Background-foreground modeling based on spatio-temporal sparse subspace clustering publication-title: IEEE Transactions on Image Processing – reference: (pp. 469–476). – reference: Nishani, E., & Cico, B. (2017). Computer vision approaches based on deep learning and neural networks: Deep neural networks for video analysis of human pose estimation. In – reference: Narayanamurthy, P., & Vaswani, N. (2018). A fast and memory-efficient algorithm for robust PCA (MEROP). In – volume: 86 start-page: 2278 year: 1998 end-page: 2324 ident: b78 article-title: Gradient-based learning applied to document recognition publication-title: Proceedings of IEEE – volume: 15 start-page: 1929 year: 2014 end-page: 1958 ident: b303 article-title: Dropout: A simple way to prevent neural networks from overfitting publication-title: Journal of Machine Learning Research (JMLR) – reference: Cheng, Y., Diakonikolas, I., Kane, D., & Stewart, A. (2018). Robust learning of fixed-structure Bayesian networks. In – volume: 23 start-page: 1083 year: 2012 end-page: 1101 ident: b96 article-title: Background subtraction via incremental maximum margin criterion: A discriminative approach publication-title: Machine Vision and Applications – reference: (pp. 422–433). – year: 2018 ident: b332 article-title: Mathematics of deep learning – reference: Javed, S., Mahmood, A., Bouwmans, T., & Jung, S. (2016a). Motion-aware graph regularized RPCA for background modeling of complex scenes. In – volume: 55 start-page: 1 year: 2016 end-page: 18 ident: b271 article-title: Incremental principal component pursuit for video background modeling publication-title: Journal of Mathematical Imaging and Vision – reference: Rodriguez, P., & Wohlberg, B. 2015. Translational and rotational jitter invariant incremental principalcomponent pursuit for video background modeling. In – reference: Yan, Y., Zhao, H., Kao, F., Vargas, V., Zhao, S., & Ren, J. (2018). Deep background subtraction of thermal and visible imagery for pedestrian detection in videos. In – reference: Lopez-Rubio, F., Lopez-Rubio, E., Luque-Baena, R., Dominguez, E., & Palomo, E. (2014). Color space selection for self-organizing map based foreground detection in video sequences. In – year: 2017 ident: b237 article-title: Analysis of universal adversarial perturbations – reference: (pp. 742–751). – year: 2019 ident: b404 article-title: Deconstructing generative adversarial networks – reference: (pp. 751–767). – reference: Haines, T., & Xiang, T. (2012). Background subtraction with Dirichlet processes. In – year: 2016 ident: b346 article-title: Interactive deep learning method for segmenting moving objects publication-title: Pattern Recognition Letters – reference: Patil, P., & Murala, S. (2019). FgGAN: A cascaded unpaired learning for background estimation and foreground segmentation. In – reference: Haeffele, B., & Vidal, R. (2017). 
Global optimality in neural network training. In – reference: Chacon-Murguia, M., Ramirez-Alonso, G., & Gonzalez-Duarte, S. (2013). Improvement of a neural-fuzzy motion detection vision model for complex scenario conditions. In – reference: Cane, T., & Ferryman, J. Evaluating deep semantic segmentation networks for object detection in maritime surveillance. In – reference: Baf, F. E., Bouwmans, T., & Vachon, B. (2008b). Fuzzy integral for moving object detection. In – reference: (pp. 2811–2818). – reference: (pp. 1–4). – reference: Zheng, S., Song, Y., Leung, T., & Goodfellow, I. (2018). Improving the robustness of deep neural networks via stability training. In – reference: Nair, V., & Hinton, G. (2010). Rectified linear units improve restricted Boltzmann machines. In – reference: . (pp. 1729–1736). – reference: Cuevas, C., & Garcia, N. (2010). Tracking-based non-parametric background-foreground classification in a chromaticity-gradient space. In – year: 2018 ident: b380 article-title: Background subtraction with real-time semantic segmentation – volume: 25 start-page: 1 year: 2016 end-page: 13 ident: b36 article-title: DehazeNet: AN end-to-end system for single image haze removal publication-title: IEEE Transactions on Image Processing – reference: (pp. 192–204). – reference: Wang, X., Liu, L., Li, G., Dong, X., Zhao, P., & Feng, X. (2018). Background subtraction on depth videos with convolutional neural networks. In – reference: Mopuri, K., Garg, U., & Babu, R. (2017). Fast feature fool: A data independent approach to universal adversarial perturbations. In – reference: Camplani, M., Maddalena, L., Alcover, G. M., Petrosino, A., & Salgado, L. (2017a). RGB-D dataset: Background learning for detection and tracking from RGBD videos. In – volume: 27 start-page: 178 year: 2019 end-page: 188 ident: b354 article-title: Robust speaker localization guided by deep learning-based time-frequency masking publication-title: IEEE/ACM Transactions on Audio, Speech, and Language Processing – reference: Hasan, R., Taha, T., & Yakopcic, C. (2017). On-chip training of memristor based deep neural networks. In – reference: Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In – reference: He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In – reference: (pp. 3274–3281). – reference: Zheng, Z., & Hong, P. (2018). Robust detection of adversarial attacks by modeling the intrinsic properties of deep neural networks. In – reference: Cheng, M., Xia, L., Zhu, Z., Cai, Y., Xie, Y., & Wang, Y., et al. (2017). Time: A training-in-memory architecture for memristor-based deep neural networks. In – volume: 13 start-page: 1459 year: 2004 end-page: 1472 ident: b191 article-title: Statistical modeling of complex background for foreground object detection publication-title: IEEE Transaction on Image Processing – volume: 29 start-page: 1421 year: 1996 end-page: 1428 ident: b285 article-title: A system for counting people in video images using neural networks to identify the background scene publication-title: Pattern Recognition – reference: Krestinskaya, O., Salama, K., & James, A. (2018a). Analog back propagation learning circuits for memristive crossbar neural networks. In – reference: Chang, B., Meng, L., Haber, E., Ruthotto, L., Begert, D., & Holtham, E. (2018). Reversible architectures for arbitrarily deep residual neural networks. In – reference: Zin, T., Tin, P., Toriu, T., & Hama, H. 
A new background subtraction method using bivariate Poisson process. In – reference: Giryes, R., Sapiro, G., & Bronstein, A. (2015). On the stability of deep networks. In – year: 2018 ident: b343 article-title: Scene classification with recurrent attention of VHR remote sensing images publication-title: IEEE Transactions on Geoscience and Remote Sensing – volume: 78 start-page: 1415 year: 1990 end-page: 1442 ident: b358 article-title: 30 years of adaptive neural networks: perceptron, madaline, and backpropagation publication-title: Proceedings of the IEEE – start-page: 103 year: 2012 end-page: 139 ident: b22 article-title: Background subtraction for visual surveillance: A fuzzy approach publication-title: Handbook on soft computing for video surveillance – reference: Moosavi-Dezfooli, S., Fawzi, A., Fawzi, O., & Frossard, P. (2017). Universal adversarial perturbations. In – reference: Xiao, H., Feng, J., Lin, G., Liu, Y., & Zhang, M. (2018). MoNet: Deep motion exploitation for video object segmentation. In – reference: Yoon, J., Rameau, F., Kim, J., Lee, S., Shin, S., & Kweon, I. S. (2017). Pixel-level matching for video object segmen-tation using convolutional neural networks. In – reference: Javed, S., Sobral, A., Oh, S., Bouwmans, T., & Jung, S. (2014). OR-PCA with MRF for robust foreground detection in highly dynamic backgrounds. In – volume: 4 year: 2018 ident: b186 article-title: Labgen-p-semantic: A first step for leveraging semantic segmentation in background generation publication-title: MDPI Journal of Imaging – reference: (pp. 1670–1675). – year: 2013 ident: b364 article-title: GOSUS: Grassmannian online subspace updates with structured-sparsity publication-title: International Conference on Computer Vision, ICCV 2013 – reference: Braham, M., Pierard, S., & Droogenbroeck, M. V. (2017). Semantic background subtraction. In – reference: Graves, A., Mohamed, A., & Hinton, G. (2013). Speech recognition with deep recurrent neural networks. In – reference: Guyon, C., Bouwmans, T., & Zahzah, E. (2012). Robust principal component analysis for background subtraction: Systematic evaluation and comparative analysis. In – reference: Maddalena, L., & Petrosino, A. (2009b). 3D neural model-based stopped object detection. In – reference: Baf, F. E., Bouwmans, T., & Vachon, B. (2008c). Type-2 fuzzy mixture of Gaussians model: Application to background modeling. In – volume: 60 year: 2015 ident: b242 article-title: The detection of moving objects in video by background subtraction using dempster-shafer theory publication-title: Transactions on Electronics and Communications – volume: 64 start-page: 739 year: 2010 end-page: 747 ident: b361 article-title: Spatio-temporal context for codebook-based dynamic background subtraction publication-title: AEU-International Journal of Electronic Communication – reference: He, J., Balzano, L., & Szlam, A. (2012). Incremental gradient on the grassmannian for online foreground and background separation in subsampled video. In – volume: 26 start-page: 1702 year: 2018 end-page: 1726 ident: b337 article-title: Supervised speech separation based on deep learning: An overview publication-title: IEEE/ACM Transactions on Audio, Speech, and Language Processing – reference: Bouwmans, T., & Garcia-Garcia, B. (2019). Background Subtraction in Real Applications: Challenges, Current Models and Future Directions, Preprint. 
– volume: 19 start-page: 230 year: 2018 end-page: 241 ident: b339 article-title: Embedding structured contour and location prior in siamesed fully convolutional networks for road detection publication-title: IEEE Transactions on Intelligent Transportation Systems – year: 2018 ident: b153 article-title: 3D atrous convolutional long short-term memory network for background subtraction publication-title: IEEE Access – reference: (pp. 6645–6649). – reference: Teng, X., Yan, M., Ertugrul, A., & Lin, Y. (2018). Deep into hypersphere: Robust and unsupervised anomaly discovery in dynamic networks. In – reference: Laugraud, B., Pierard, S., & Droogenbroeck, M. V. (2016). LaBGen-P: A pixel-level stationary background generation method based on LaBGen. In – reference: , 2014. – reference: Yuan, Y., & Z. Xiong, a. Q. W. (2019). ACM: Adaptive cross-modal graph convolutional neural networks for RGB-D scene recognition. In – year: 2011 ident: b391 article-title: Background subtraction via robust dictionary learning publication-title: EURASIP Journal on Image and Video Processing, IVP 2011 – reference: Ranzato, M., Krizhevsky, A., & Hinton, G. (2010). Factored 3-Way restricted Boltzmann machines for modeling natural images. In – volume: 1 start-page: 265 year: 2009 end-page: 277 ident: b25 article-title: Modeling of dynamic backgrounds by type-2 Fuzzy Gaussians mixture models publication-title: MASAUM Journal of Basic and Applied Sciences – volume: 27 start-page: 189 year: 2019 end-page: 198 ident: b314 article-title: Gated residual networks with dilated convolutions for monaural speech enhancement publication-title: IEEE/ACM Transactions on Audio, Speech, and Language Processing – reference: Zhang, H., & Xu, D. 2006a. Fusing color and gradient features for background model. In – reference: He, K., Zhang, X., & Ren, S. (2016). Deep residual learning for image recognition. In – reference: Karadag, O., & Erdas, O. (2018). Evaluation of the robustness of deep features on the change detection problem. In – year: 2018 ident: b248 article-title: Learning deep models: Critical points and local openness – year: 2016 ident: b240 article-title: Modelling depth for nonparametric foreground segmentation using RGBD devices publication-title: Pattern Recognition Letters – reference: (pp. 1770–1778). – year: 2018 ident: b351 article-title: Foreground detection with deeply learned multi-scale spatial-temporal features publication-title: MDPI Sensors – year: 2017 ident: b276 article-title: Real-time adaptive histogram min-max bucket (HMMB) model for background subtraction publication-title: IEEE Transactions on Circuits and Systems for Video Technology – reference: . – volume: 35 start-page: 32 year: 2018 end-page: 55 ident: b329 article-title: Robust subspace learning: Robust PCA, robust subspace tracking and robust subspace recovery publication-title: IEEE Signal Processing Magazine – reference: Du, Y., Yuan, C., Hu, W., & Maybank, S. (2017). Spatio-temporal self-organizing map deep network for dynamic object detection from videos. In – reference: (pp. 893–896). – year: 2019 ident: b8 article-title: sEnDec: An improved image to image CNN for foreground localization publication-title: IEEE Intelligent Transportation Systems Transactions – reference: (pp. 3431–3440). – reference: Stauffer, C., & Grimson, E. (1999). Adaptive background mixture models for real-time tracking. In – reference: Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., & Chen, X. (2016). Improved techniques for training GANs. 
In – reference: Wang, R., Bunyak, F., Seetharaman, G., & Palaniappa, K. (2014). Static and moving object detection using flux tensor with split Gaussian models. In – year: 2017 ident: b390 article-title: Joint background reconstruction and foreground segmentation via a two-stage convolutional neural network – reference: Goyette, N., Jodoin, P., Porikli, F., Konrad, J., & Ishwar, P. (2012). Changedetection.net: A new change detection benchmark dataset. In – reference: Guo, R., & Qi, H. (2013). Partially-sparse restricted Boltzmann machine for background modeling and subtraction. In – year: 2014 ident: b37 article-title: Advanced background modeling with RGB-D sensors through classifiers combination and inter-frame foreground prediction publication-title: Machine Vision and Applications – year: 2019 ident: b50 article-title: Deep, landmark-free FAME: Face alignment, modeling, and expression estimation publication-title: International Journal of Computer Vision – volume: 20 start-page: 273 year: 1995 end-page: 297 ident: b72 article-title: Support-vector networks publication-title: Machine Learning – reference: Wang, H., Lai, Y., Cheng, W., Cheng, C., & Hua, K. (2017). Background extraction based on joint gaussian conditional random fields. In – reference: Liang, X., Liao, S., Wang, X., Liu, W., Chen, Y., & Li, S. (2018). Deep background subtraction with guided learning. In – volume: 5 start-page: 115 year: 1943 end-page: 133 ident: b77 article-title: A logical calculus of the ideas immanent in nervous activity publication-title: Bulletin of Mathematical Biophysics – reference: Tao, Y., Palasek, P., Ling, Z., & Patras, I. (2017). Background modelling based on generative Unet. In – reference: Messelodi, S., Modena, C., Segata, N., & Zanin, M. (2005). A Kalman filter based background updating algorithm robust to sharp illumination changes. In – reference: Maddalena, L., & Petrosino, A. (2009c). Self organizing and fuzzy modelling for parked vehicles detection. In – year: 2015 ident: b392 article-title: Stacked multi-layer self-organizing map for background modeling publication-title: IEEE Transactions on Image Processing – reference: Cohen, N., & Shashua, A. (2016). Convolutional rectifier networks as generalized tensor decompositions. In – volume: 4 start-page: 79 year: 2018 ident: b123 article-title: Deep learning with a spatiotemporal descriptor of appearance and motion estimation for video anomaly detection publication-title: MDPI Journal of Imaging – year: 2018 ident: b382 article-title: Combining background subtraction algorithms with convolutional neural network – year: 2013 ident: b179 article-title: Auto-encoding variational bayes – year: 2018 ident: b4 article-title: Video foreground localization from traditional methods to deep learning – volume: 19 start-page: 1 year: 2018 end-page: 42 ident: b81 article-title: Connections with robust PCA and the role of emergent sparsity in variational autoencoder models publication-title: Journal of Machine Learning Research (JMLR) – reference: Culibrk, D., Marques, O., Socek, D., Kalva, H., & Furht, B. (2006). A neural network approach to Bayesian background modeling for video object segmentation. In – reference: He, K., Zhang, X., Ren, S., & Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on Imagenet classification. In – reference: Sobral, A., Bouwmans, T., & Zahzah, E. (2015a). Comparison of matrix completion algorithms for background initialization in videos. In – reference: Yun, C., Sra, S., & Jadbabaie, A. 
(2018). A critical view of global optimality in deep learning. In – reference: Javed, S., Bouwmans, T., & Jung, S. (2015c). Stochastic decomposition into low rank and sparse tensor for robust background subtraction. In – year: 2018 ident: b55 article-title: Learning to detect instantaneous changes with retrospective convolution and static sample synthesis – reference: -norm stacked robust autoencoders for domain adaptation. In – reference: Chang, T., Ghandi, T., & Trivedi, M. (2004). Vision modules for a multi sensory bridge monitoring approach. In – volume: 39 start-page: 1137 year: 2017 end-page: 1149 ident: b267 article-title: Faster R-CNN: Towards real-time object detection with region proposal networks publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence – reference: Bai, J., Zhang, H., & Li, Z. (2018). The generalized detection method for the dim small targets by faster R-CNN integrated with GAN. In – year: 2013 ident: b93 article-title: Finite asymmetric generalized Gaussian mixture models learning for infrared object detection publication-title: Computer Vision and Image Understanding – year: 2018 ident: b107 article-title: Lightweight probabilistic deep networks – reference: (pp. 4118–4122). – year: 2007 ident: b146 article-title: Deep belief nets publication-title: NIPS Tutorial – start-page: 1 year: 2018 end-page: 23 ident: b87 article-title: Background subtraction based on deep convolutional neural networks features publication-title: Multimedia Tools and Applications – year: 2018 ident: b5 article-title: A foreground inference network for video surveillance using multi-view receptive field – reference: Gemignani, G., & Rozza, A. (2015). A novel background subtraction approach based on multi-layered self organizing maps. In – reference: (pp. 1140–1148). – year: 2018 ident: b376 article-title: ReMotENet: EFficient relevant motion event detection for large-scale home surveillance videos – reference: Halfaoui, I., Bouzaraa, F., & Urfalioglu, O. (2016). CNN-based initial background estimation. In – volume: 25 start-page: 1006 year: 2017 end-page: 1012 ident: b84 article-title: A hierarchical fused Fuzzy deep neural network for data classification publication-title: IEEE Transactions on Fuzzy Systems – reference: (pp. 263–270). – year: 2018 ident: b252 article-title: MSFgNet: A novel compact end-to-end deep network for moving object detection publication-title: IEEE Transactions on Intelligent Transportation Systems – year: 2017 ident: b10 article-title: A deep convolutional neural network for background subtraction publication-title: Pattern Recognition – year: 2019 ident: b338 article-title: Deep face recognition: A survey – year: 2018 ident: b234 article-title: Analytics of deep neural network-based background subtraction publication-title: MDPI Journal of Imaging – year: 2018 ident: b113 article-title: Unsupervised video object segmentation for deep reinforcement learning – year: 2018 ident: b294 article-title: Background subtraction using Gaussian publication-title: IET Image Processing – reference: (pp. 255–261). – reference: Lanza, A., Tombari, F., & Stefano, L. D. (2010). Accurate and efficient background subtraction by monotonic second-degree polynomial fitting. In – reference: Hofmann, M., Tiefenbacher, P., & Rigoll, G. (2012). Background Segmentation with Feedback: The Pixel-Based Adaptive Segmenter. In – reference: Chen, Y., Wang, J., & Lu, H. (2015). Learning sharable models for robust background subtraction. 
In – reference: Gao, Y., Cai, H., Zhang, X., Lan, L., & Luo, Z. (2018). Background subtraction via 3D convolutional neural networks. In – year: 2017 ident: b65 article-title: Anomaly detection in surveillance videos using deep residual networks – reference: Cohen, N., Sharir, O., & Shashua, A. (2016b). On the expressive power of deep learning: A tensor analysis. In – year: 2018 ident: b349 article-title: Robust hierarchical deep learning for vehicular management publication-title: IEEE Transactions on Vehicular Technology – reference: (pp. 652–661). – reference: (pp. 219–229). – year: 2017 ident: b45 article-title: Robust, deep and inductive anomaly detection – reference: (pp. 3347–3354). – reference: Wang, W., Sun, Y., Eriksson, B., Duke, W., & Aggarwal, V. Wide compression: Tensor ring net. In – reference: (pp. 1271–1276). – year: 2016 ident: b290 article-title: Stochasticnet: Forming deep neural networks via stochastic connectivity publication-title: IEEE Access – volume: 28 start-page: 26 year: 2018 end-page: 91 ident: b28 article-title: On the role and the importance of features for background modeling and foreground detection publication-title: Computer Science Review – start-page: 165 year: 2014 end-page: 169 ident: b261 article-title: Background scene modeling for PTZ cameras using RBM publication-title: International Conference on Control, Automation and Information Sciences, ICCAIS 2014 – reference: Sobral, A., Bouwmans, T., & Zahzah, E. (2015b). Double-constrained RPCA based on saliency maps for foreground detection in automated maritime surveillance. In – reference: (pp. 3527–3534). – reference: Jiang, W., Gao, H., Chung, F., & Huang, H. (2016). The – volume: 1 start-page: 22 year: 2019 end-page: 53 ident: b293 article-title: Performance analysis of moving object detection using BGS techniques in visual surveillance publication-title: International Journal of Spatio-Temporal Data Science, Inderscience – start-page: 254 year: 2017 end-page: 257 ident: b155 article-title: Deep neural network accelerator based on FPGA publication-title: NAFOSTED 2017 – reference: Thekumparampil, K., Khetan, A., Lin, Z., & Oh, S. (2018). Robustness of conditional GANs to noisy labels. In – year: 2018 ident: b199 article-title: Learning multi-scale features for foreground segmentation – volume: 28 start-page: 657 year: 2006 end-page: 662 ident: b145 article-title: A texture-based method for modeling the background and detecting moving objects publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI 2006 – year: 1957 ident: b275 publication-title: The perceptron–a perceiving and recognizing automaton – volume: 58 year: 2011 ident: b40 article-title: Robust principal component analysis? publication-title: International Journal of ACM – volume: 97 start-page: 173 year: 2018 end-page: 182 ident: b19 article-title: Deep neural networks for texture classification: A theoretical analysis publication-title: Neural Networks – reference: Choo, S., Seo, W., Jeong, D., & Cho, N. 2018b. Learning background subtraction by video synthesis and multi-scale recurrent networks. In – reference: (pp. 153–160). – start-page: 85 year: 2015 end-page: 117 ident: b284 article-title: Deep learning in neural networks: An overview publication-title: Neural Networks – reference: He, J., Balzano, L., & Luiz, J. (2011). Online robust subspace tracking from partial information. 
In – year: 2018 ident: b398 article-title: A novel background subtraction algorithm based on parallel vision and Bayesian GANs publication-title: Neurocomputing – volume: 17 start-page: 1168 year: 2008 end-page: 1177 ident: b213 article-title: A self organizing approach to background subtraction for visual surveillance applications publication-title: IEEE Transactions on Image Processing – reference: (pp. 474–480). – reference: Afonso, B., Cinelli, L., Thomaz, L., da Silva, A., da Silva, E., & Netto, S. (2018). Moving-camera video surveillance in cluttered environments using deep features. In – year: 2017 ident: b125 article-title: A review of semantic segmentation using deep neural networks publication-title: International Journal of Multimedia Information Retrieval – reference: Huang, J., Huang, X., & Metaxas, D. (2009). Learning with dynamic group sparsity. In – reference: Liang, D., Kaneko, S., Hashimoto, M., Iwata, K., Zhao, X., & Satoh, Y. (2013). Co-occurrence-based adaptive background model for robust object detection. In – volume: 10 start-page: 40 year: 2007 ident: b51 article-title: Efficient hierarchical method for background subtraction publication-title: Pattern Recognition – volume: 26 start-page: 5244 year: 2017 end-page: 5256 ident: b174 article-title: Extensive benchmark and survey of modeling methods for scene background initialization publication-title: IEEE Transactions on Image Processing – reference: Garcia-Gonzalez, J., de Lazcano-Lobato, J. O., Luque-Baena, R., & Molina-Cabello, M. (2018). Background modeling for video sequences by stacked denoising autoencoders. In – reference: Rosell-Ortega, J., Andreu-Garcia, G., Rodas-Jorda, A., & Atienza-Vanacloig, V. (2008). Background modelling in demanding situations with confidence measure. In – reference: (pp. 246–252). – reference: Zhang, H., & Xu, D. (2006b). Fusing color and texture features for background model. In – volume: 14 start-page: 115 year: 1994 end-page: 133 ident: b18 article-title: Approximation and estimation bounds for artificial neural networks publication-title: Neural Networks – reference: (pp. 341–350). – reference: Bakkay, M., Rashwan, H., Salmane, H., Khoudour, L., Puig, D., & Ruichek, Y. (2018). BSCGAN: Deep background subtraction with conditional generative adversarial networks. In – volume: 20 start-page: 1709 year: 2011 end-page: 1724 ident: b17 article-title: ViBe: A universal background subtraction algorithm for video sequences publication-title: IEEE Transactions on Image Processing – year: 2017 ident: b264 article-title: Temporal weighted learning model for background estimation with an automatic re-initialization stage and adaptive parameters update publication-title: Pattern Recognition Letters – reference: Hu, Y., Huang, J., & Schwing, A. (2017). MaskRNN: Instance level video object segmentation. In – reference: Oliver, N., Rosario, B., & Pentland, A. (1999). A Bayesian computer vision system for modeling human interactions. In – reference: (pp. 254–265). 
– year: 2016 ident: b330 article-title: Matconvnet: Convolutional neural networks for MATLAB – volume: 97 start-page: 162 year: 2018 end-page: 172 ident: b344 article-title: Visualizing deep neural network by alternately image blurring and deblurring publication-title: Neural Networks – volume: 122 start-page: 22 year: 2014 end-page: 34 ident: b30 article-title: Robust PCA via principal component pursuit: A review for a comparative evaluation in video surveillance publication-title: Special Issue on Background Models Challenge, Computer Vision and Image Understanding, CVIU 2014 – reference: Guo, L., & Du, M. (2012). Student’s t-distribution mixture background model for efficient object detection. In – start-page: 1 year: 2010 end-page: 8 ident: b217 article-title: A fuzzy spatial coherence-based approach to background/foreground separation for moving object detection publication-title: Neural Computing and Applications, NCA 2010 – year: 2018 ident: b225 article-title: Background subtraction for moving object detection in RGB-D data: A survey publication-title: MDPI Journal of Imaging – volume: 112 start-page: 256 year: 2018 end-page: 262 ident: b202 article-title: Foreground segmentation using convolutional neural networks for multiscale feature encoding publication-title: Pattern Recognition Letters – reference: Zhou, C., & Paffenroth, R. (2017). Anomaly detection with robust deep autoencoders. In – reference: Tavakkoli, A., Ambardekar, A., Nicolescu, M., & Louis, S. (2007). A genetic approach to training support vector data descriptors for background modeling in video data. In – reference: (pp. 1026–1034). – reference: Elgammal, A., & Davis, L. (2000). Non-parametric model for background subtraction. In – volume: 34 start-page: 014004 year: 2017 ident: b133 article-title: Stable architectures for deep neural networks publication-title: Inverse Problems – year: 2018 ident: b309 article-title: Unsupervised deep context prediction for background estimation and foreground segmentation publication-title: Machine Vision and Applications – reference: Akilan, T., & Wu, J. (2018). Double encoding - slow decoding image to image CNN for foreground identification with application towards intelligent transportation. In – reference: Javed, S., Bouwmans, T., Sultana, M., & Jung, S. (2017). Moving object detection on RGB-D videos using graph regularized spatiotemporal RPCA. In – reference: Guo, X., Wang, X., Yang, L., Cao, X., & Ma, Y. (2014). Robust Foreground Detection using Smoothness and Arbitrariness Constraints. In – volume: 35 start-page: 872 year: 2013 end-page: 1886 ident: b33 article-title: Invariant scattering convolution networks publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence – reference: Maddalena, L., & Petrosino, A. (2015). Towards benchmarking scene background initialization. In – reference: Wang, M., Li, W., & Wang, X. (2012). Transferring a generic pedestrian detector towards specific scenes. In: – reference: (pp. 4480–4488). – year: 2016 ident: b52 article-title: Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution and fully connected CRFs – reference: Maddalena, L., & Petrosino, A. (2012). The SOBS algorithm: What are the limits?. 
SubjectTerms | Auto-encoder networks; Background subtraction; Computer Science; Convolutional neural networks; Generative adversarial networks; Image Processing; Restricted Boltzmann machines
URI | https://dx.doi.org/10.1016/j.neunet.2019.04.024 https://www.ncbi.nlm.nih.gov/pubmed/31129491 https://www.proquest.com/docview/2231849664 https://hal.science/hal-02118618 |