Efficient Top Rank Optimization with Gradient Boosting for Supervised Anomaly Detection
Published in: Machine Learning and Knowledge Discovery in Databases, Vol. 10534, pp. 20-35
Format: Book Chapter
Language: English
Published: Springer International Publishing AG, Switzerland, 2017
Series: Lecture Notes in Computer Science
Summary: In this paper we address the anomaly detection problem in a supervised setting where positive examples may be very sparse. We tackle this task with a learning-to-rank strategy, optimizing a differentiable smoothed surrogate of the so-called Average Precision (AP). Despite its non-convexity, we show how to use it efficiently in a stochastic gradient boosting framework. We show that optimizing AP yields much better top-ranked alerts than optimizing state-of-the-art measures. We demonstrate on anomaly detection tasks that the benefits of our method are even more pronounced in highly unbalanced scenarios.
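The summary's key ingredient is a smooth surrogate of AP whose gradient a boosting round can fit. The chapter's exact smoothing and boosting integration are not reproduced in this record; the following minimal sketch assumes a common sigmoid-based smoothing of the pairwise rank indicator and uses JAX autodiff to obtain the per-example pseudo-residuals a stochastic gradient boosting step would fit. All names (`smoothed_ap`, `tau`) and the temperature value are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of a sigmoid-smoothed Average Precision surrogate.
import jax
import jax.numpy as jnp

def smoothed_ap(scores, labels, tau=0.5):
    """Differentiable surrogate of AP: the hard indicator 1[s_j > s_i]
    in the rank of example i is replaced by a sigmoid of the scaled
    score difference, making AP smooth in the scores."""
    # Pairwise smoothed "j is ranked above i" indicators.
    diff = (scores[None, :] - scores[:, None]) / tau   # diff[i, j] = (s_j - s_i) / tau
    above = jax.nn.sigmoid(diff)
    above = above - jnp.diag(jnp.diag(above))          # exclude self-comparisons

    pos = labels.astype(scores.dtype)                  # 1 for anomalies, 0 otherwise
    rank = 1.0 + above.sum(axis=1)                     # smoothed rank of each example
    pos_above = 1.0 + above @ pos                      # smoothed count of positives at or above
    # AP = mean over positives of precision at their (smoothed) rank.
    return jnp.sum(pos * pos_above / rank) / jnp.sum(pos)

# Gradient of the negated surrogate w.r.t. the scores; its negation is the
# pseudo-residual vector a boosting round fits its next weak learner to.
neg_ap_grad = jax.grad(lambda s, y: -smoothed_ap(s, y))

if __name__ == "__main__":
    y = jnp.array([0, 0, 1, 0, 1, 0, 0, 0], dtype=jnp.float32)  # sparse positives
    s = jax.random.normal(jax.random.PRNGKey(0), (8,))
    print("smoothed AP:", smoothed_ap(s, y))
    print("pseudo-residuals:", -neg_ap_grad(s, y))
```

In a boosting library that exposes a custom-objective hook, these per-example gradients (plus a diagonal Hessian approximation where required) would be returned at each round; subsampling the training examples per round gives the stochastic variant the summary refers to.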
ISBN: 3319712489; 9783319712482
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-319-71249-9_2