Normality Learning in Multispace for Video Anomaly Detection


Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 31, No. 9, pp. 3694-3706
Main Authors: Zhang, Yu; Nie, Xiushan; He, Rundong; Chen, Meng; Yin, Yilong
Format: Journal Article
Language: English
Published: New York, IEEE, 01.09.2021
Publisher: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Summary: Video anomaly detection is a challenging task owing to the rare and diverse nature of abnormal events. Most existing methods, however, learn normality only in a single space, focusing on low-level detailed features, which makes them easily affected by unimportant pixels. To address this issue, we propose a semi-supervised method based on a generative adversarial network and frame prediction, in which normality is learned in both the original image space and the latent space, and events deviating from this normality are detected as anomalies. In particular, given a video clip, we first predict a future frame and minimize the prediction error between the generated frame and its ground truth. Thereafter, we encode the predicted frames and their ground truths in the latent space and minimize their differences. In the testing phase, we calculate the normality score of each frame in both the image and latent spaces to obtain a comprehensive evaluation. Utilizing multiple spaces captures more information about the normality distribution of the data, which benefits anomaly detection. Experiments on three benchmark datasets demonstrate the effectiveness of the proposed method.
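The scoring pipeline the summary describes (measure prediction error in the image space, compare encodings in the latent space, then fuse the two) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the PSNR-based image-space score, the Euclidean latent distance, and the fusion weight `w` are all assumptions.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between a predicted frame and its
    ground truth; higher means the frame was easier to predict."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / (mse + 1e-8))

def normality_score(pred_frame, gt_frame, pred_latent, gt_latent, w=0.5):
    """Combine an image-space score and a latent-space score into one
    normality score; low values suggest an anomalous frame."""
    # Image space: prediction quality of the generated future frame.
    s_img = psnr(pred_frame, gt_frame)
    # Latent space: negative distance between the two encodings
    # (closer encodings mean a more "normal" frame).
    s_lat = -np.linalg.norm(pred_latent - gt_latent)
    # Weighted fusion of the two spaces; w is a hypothetical choice.
    return w * s_img + (1.0 - w) * s_lat
```

In practice the per-frame scores would be normalized over each test video before thresholding; here the raw fused value is enough to rank frames, with well-predicted frames scoring higher than poorly predicted ones.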
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2020.3039798