On Evaluation of Outlier Rankings and Outlier Scores

Erich Schubert; Remigius Wojdanowski; Arthur Zimek; Hans-Peter Kriegel
In Proceedings of the 12th SIAM International Conference on Data Mining (SDM), Anaheim, CA: 1047-1058, 2012.

Abstract:

Outlier detection research currently focuses on the development of new methods and on improving their computation time. Evaluation, however, is rather heuristic, often considering just the precision in the top k results or the area under the ROC curve. These evaluation procedures do not allow for assessing the similarity between methods. Judging the similarity of, or correlation between, two rankings of outlier scores is an important question in itself, but it is also an essential step towards meaningfully building outlier detection ensembles, where this aspect has so far been completely ignored. In this study, our generalized view of evaluation methods allows us both to evaluate the performance of existing methods and to compare different methods w.r.t. their detection performance. Our new evaluation framework takes the class imbalance problem into consideration and offers new insights on the similarity and redundancy of existing outlier detection methods. As a result, the design of effective ensemble methods for outlier detection is considerably enhanced.
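To make the evaluation measures named in the abstract concrete, the sketch below illustrates the two standard procedures it criticizes (precision in the top k results and ROC AUC) together with a rank correlation between two score vectors, which is the kind of similarity judgment the paper motivates. This is a minimal illustration, not the paper's adjusted evaluation framework; the function names, the toy data, and the two hypothetical detectors are all assumptions for demonstration.

```python
import numpy as np
from scipy.stats import spearmanr

def precision_at_k(scores, labels, k):
    """Fraction of true outliers among the k highest-scored objects."""
    top_k = np.argsort(scores)[::-1][:k]
    return labels[top_k].mean()

def roc_auc(scores, labels):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation.

    Ties in scores are broken arbitrarily here; a full implementation
    would assign tied objects their average rank.
    """
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # Rank sum of the positives, normalized over all positive/negative pairs.
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Toy example: 10 objects, 2 labeled outliers, two hypothetical detectors.
labels   = np.array([0, 0, 1, 0, 0, 0, 0, 1, 0, 0])
scores_a = np.array([0.1, 0.2, 0.9, 0.3, 0.1, 0.2, 0.4, 0.8, 0.2, 0.1])
scores_b = np.array([0.2, 0.1, 0.7, 0.2, 0.3, 0.1, 0.5, 0.9, 0.1, 0.2])

print(precision_at_k(scores_a, labels, k=2))      # 1.0: both outliers ranked in the top 2
print(roc_auc(scores_a, labels))                  # 1.0: every outlier outranks every inlier
print(spearmanr(scores_a, scores_b).correlation)  # rank similarity of the two detectors
```

Note that neither precision at k nor ROC AUC says anything about how similar two detectors are to each other; the Spearman correlation on the raw score rankings is one simple way to probe that, and it is exactly this kind of comparison that matters when deciding whether combining detectors into an ensemble adds diversity or mere redundancy.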

Online:

SIAM Electronic Edition (Open Access) - Preprint (local) - DBLP BibTeX record - Source code