Speedup Two-Class Supervised Outlier Detection
Published in: IEEE Access, Vol. 6, pp. 63923–63933
Main Authors:
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2018
Summary: Outlier detection is an important topic in the data mining and machine learning communities. Two-class supervised outlier detection requires solving a large quadratic program whose size is twice the number of samples in the training set, so training such a model is time-consuming. In this paper, we show that the result of two-class supervised outlier detection is determined by a small set of critical samples, namely those with nonzero Lagrange multipliers, and that these critical samples must lie near the boundary of each class. It is therefore much faster to train the two-class supervised outlier detector on a subset consisting only of critical samples. We compare three methods for finding boundary samples. The experimental results show that the nearest-neighbor distribution is the most suitable for identifying critical samples for two-class supervised outlier detection. When only the critical samples selected by nearest-neighbor distribution information are retained, two-class supervised novelty detection becomes much faster and its performance does not degrade.
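As a rough illustration of the sample-selection idea described in the summary, the sketch below keeps only samples whose k-nearest-neighbor neighborhood contains both classes (a proxy for "near the class boundary", where critical samples with nonzero Lagrange multipliers are expected to lie) and then trains a standard kernel classifier on that reduced set. This is a minimal sketch, not the authors' implementation: the function name `boundary_subset`, the choices of `k` and `mix_threshold`, the synthetic data, and the use of scikit-learn's `SVC` as a stand-in for the paper's two-class supervised outlier detector are all assumptions for illustration.

```python
# Hypothetical sketch: retain boundary ("critical") candidates found via the
# nearest-neighbor class distribution, then train on the reduced subset.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC  # stand-in for the paper's two-class detector


def boundary_subset(X, y, k=10, mix_threshold=0.1):
    """Return indices of samples whose k-NN neighborhood mixes both classes.

    A sample deep inside its own class sees neighbors of one label only;
    samples near the class boundary see both labels. Those mixed-neighborhood
    samples are kept as candidate critical samples (the ones likely to receive
    nonzero Lagrange multipliers in the quadratic program).
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    neighbor_labels = y[idx[:, 1:]]                      # drop self (column 0)
    other_frac = (neighbor_labels != y[:, None]).mean(axis=1)
    return np.where(other_frac >= mix_threshold)[0]


X, y = make_blobs(n_samples=2000, centers=2, cluster_std=2.0, random_state=0)
keep = boundary_subset(X, y)
print(f"retained {len(keep)} of {len(X)} samples")

# The quadratic program behind the kernel machine is now solved over the
# small candidate subset instead of the full training set.
model = SVC(kernel="rbf").fit(X[keep], y[keep])
```

Because the QP's size scales with the number of training samples, shrinking the training set to boundary candidates is what yields the speedup; the claim in the summary is that, with a suitable neighbor-based selection rule, the discarded interior samples would have had zero Lagrange multipliers anyway, so accuracy is preserved.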
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2018.2877701