Attention-Driven Loss for Anomaly Detection in Video Surveillance
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 30, No. 12, pp. 4639–4647 |
Main Authors | Zhou, Joey Tianyi; Zhang, Le; Fang, Zhiwen; Du, Jiawei; Peng, Xi; Xiao, Yang |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.12.2020 |
Abstract | Recent video anomaly detection methods focus on reconstructing or predicting frames. Under this umbrella, the long-standing inter-class data-imbalance problem reduces to the imbalance between moving foreground objects and the stationary background, which has been little investigated by existing solutions. Naively optimizing the reconstruction loss biases the optimization towards reconstructing the background rather than the foreground objects of interest. To solve this, we propose a simple yet effective solution, termed attention-driven loss, to alleviate the foreground-background imbalance problem in anomaly detection. Specifically, we compute a single mask map that summarizes the frame evolution of moving foreground regions and suppresses the background in the training video clips. We then construct an attention map by combining the mask map and the background, assigning different weights to the foreground and background regions respectively. The proposed attention-driven loss is independent of the backbone network and can easily augment most existing anomaly detection models. Augmented with the attention-driven loss, the model achieves an AUC of 86.0% on Avenue, 83.9% on Ped1, and 96.0% on Ped2. Extensive experimental results and ablation studies further validate the effectiveness of our model. |
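The weighting scheme described in the abstract can be sketched as follows — a minimal NumPy illustration, assuming a simple frame-difference motion mask and illustrative hyper-parameters `fg_weight`, `bg_weight`, and `thresh` (the paper's actual mask construction and weight values may differ):

```python
import numpy as np

def attention_driven_loss(pred, target, clip,
                          fg_weight=2.0, bg_weight=1.0, thresh=0.05):
    """Attention-weighted reconstruction error for one frame.

    pred, target : (H, W) predicted / ground-truth frames, values in [0, 1].
    clip         : (T, H, W) training clip used to localise moving regions.
    fg_weight, bg_weight, thresh are illustrative values, not from the paper.
    """
    # Mask map: summarise frame evolution by taking the largest absolute
    # temporal difference at each pixel over the clip, then thresholding.
    motion = np.abs(np.diff(clip, axis=0)).max(axis=0)
    mask = (motion > thresh).astype(np.float32)

    # Attention map: larger weight on moving foreground, smaller weight
    # on the stationary background.
    attention = fg_weight * mask + bg_weight * (1.0 - mask)

    # Attention-driven squared reconstruction error.
    return float(np.mean(attention * (pred - target) ** 2))
```

Under this sketch, identical per-pixel errors are penalised `fg_weight / bg_weight` times more heavily in moving regions, counteracting the bias towards reconstructing the (much larger) static background.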
Author | Zhou, Joey Tianyi; Fang, Zhiwen; Zhang, Le; Peng, Xi; Xiao, Yang; Du, Jiawei |
Author_xml | – sequence: 1 givenname: Joey Tianyi orcidid: 0000-0002-4675-7055 surname: Zhou fullname: Zhou, Joey Tianyi organization: Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (ASTAR), Singapore – sequence: 2 givenname: Le orcidid: 0000-0002-6930-8674 surname: Zhang fullname: Zhang, Le email: lzhang027@ntu.edu.sg organization: Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (ASTAR), Singapore – sequence: 3 givenname: Zhiwen surname: Fang fullname: Fang, Zhiwen organization: Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou, China – sequence: 4 givenname: Jiawei surname: Du fullname: Du, Jiawei organization: Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (ASTAR), Singapore – sequence: 5 givenname: Xi orcidid: 0000-0002-5727-2790 surname: Peng fullname: Peng, Xi organization: College of Computer Science, Sichuan University, Chengdu, China – sequence: 6 givenname: Yang orcidid: 0000-0002-7739-4146 surname: Xiao fullname: Xiao, Yang organization: National Key Laboratory of Science and Technology on Multi-Spectral Information Processing, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China |
CODEN | ITCTEM |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020 |
DOI | 10.1109/TCSVT.2019.2962229 |
Discipline | Engineering |
EISSN | 1558-2205 |
EndPage | 4647 |
ExternalDocumentID | 10_1109_TCSVT_2019_2962229 8943099 |
Genre | orig-research |
GrantInformation_xml | – fundername: Fundamental Research Funds for the Central Universities grantid: YJ201949 funderid: 10.13039/501100012226 – fundername: Singapore government’s Research, Innovation and Enterprise 2020 plan (Advanced Manufacturing and Engineering domain) grantid: A18A1b0045 – fundername: National Natural Science Foundation of China grantid: 61502187; 61702182; 61806135; 61625204; 61836006 funderid: 10.13039/501100001809 |
ISSN | 1051-8215 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 12 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0002-7739-4146 0000-0002-5727-2790 0000-0002-4675-7055 0000-0002-6930-8674 |
PageCount | 9 |
PublicationDate | 2020-12-01 |
PublicationPlace | New York |
PublicationTitle | IEEE transactions on circuits and systems for video technology |
PublicationTitleAbbrev | TCSVT |
PublicationYear | 2020 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 4639 |
SubjectTerms | Ablation Anomalies Anomaly detection attention Computer networks Convolutional codes Deep learning Object recognition Optimization Task analysis Training data |
Title | Attention-Driven Loss for Anomaly Detection in Video Surveillance |
URI | https://ieeexplore.ieee.org/document/8943099 https://www.proquest.com/docview/2468756980 |
Volume | 30 |