Robust, Deep and Inductive Anomaly Detection

Bibliographic Details
Published in: Machine Learning and Knowledge Discovery in Databases, Vol. 10534, pp. 36-51
Main Authors: Chalapathy, Raghavendra; Menon, Aditya Krishna; Chawla, Sanjay
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2017
Series: Lecture Notes in Computer Science
Summary: PCA is a classical statistical technique whose simplicity and maturity have seen it find widespread use for anomaly detection. However, it is limited in this regard by being sensitive to gross perturbations of the input, and by seeking a linear subspace that captures normal behaviour. The first issue has been dealt with by robust PCA, a variant of PCA that explicitly allows some data points to be arbitrarily corrupted; however, this does not resolve the second issue, and indeed introduces the new issue that one can no longer inductively find anomalies on a test set. This paper addresses both issues in a single model, the robust autoencoder. This method learns a nonlinear subspace that captures the majority of data points, while allowing for some data to have arbitrary corruption. The model is simple to train and leverages recent advances in the optimisation of deep neural networks. Experiments on a range of real-world datasets highlight the model's effectiveness.
ISBN: 3319712489; 9783319712482
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-319-71249-9_3
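
The summary above describes the robust autoencoder at a high level: an autoencoder captures a nonlinear subspace of the normal data while an explicit sparse term absorbs gross corruptions. The Python (PyTorch) sketch below illustrates that general idea by alternating gradient steps on the network with a soft-thresholding update of a corruption matrix N; the layer sizes, the penalty lam, and the alternating schedule are illustrative assumptions, not the exact architecture or objective from the chapter.

import torch
import torch.nn as nn

def soft_threshold(x, lam):
    # Proximal operator of lam * ||.||_1: shrinks each entry towards zero.
    return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

class Autoencoder(nn.Module):
    # Small nonlinear autoencoder; layer sizes are placeholders.
    def __init__(self, d_in, d_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.decoder = nn.Linear(d_hidden, d_in)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fit_robust_autoencoder(X, lam=0.1, outer_iters=20, inner_steps=50, lr=1e-3):
    # Sketch of a robust-PCA-style decomposition with an autoencoder:
    # roughly, minimise ||X - N - AE(X - N)||^2 + lam * ||N||_1 over the
    # network weights and the sparse corruption matrix N, alternating
    # (a) Adam steps on the autoencoder with N fixed, and
    # (b) a closed-form soft-thresholding update of N.
    ae = Autoencoder(X.shape[1])
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    N = torch.zeros_like(X)  # sparse part absorbing gross corruptions
    for _ in range(outer_iters):
        for _ in range(inner_steps):  # (a) fit the autoencoder to X - N
            opt.zero_grad()
            recon = ae(X - N)
            loss = ((X - N - recon) ** 2).mean()
            loss.backward()
            opt.step()
        with torch.no_grad():  # (b) L1 proximal update of the sparse part
            N = soft_threshold(X - ae(X - N), lam)
    return ae, N

Given a float matrix X (for example, X = torch.randn(500, 20) with a few grossly corrupted rows), rows with large entries in N, or equivalently large reconstruction error under the trained autoencoder, can be flagged as anomalies. Because the encoder and decoder are ordinary network layers, the same score can be computed on unseen test points, which is the inductive property the summary contrasts with robust PCA.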