Accelerating the Local Outlier Factor Algorithm on a GPU for Intrusion Detection Systems
The Local Outlier Factor (LOF) is a powerful anomaly detection technique used in machine learning and classification.
The algorithm defines a notion of a local outlier in which the degree to which an object is outlying depends on the
density of its local neighborhood, and each object can be assigned an LOF value that represents the likelihood of that
object being an outlier. Although the concept of a local outlier is useful, computing LOF values for every data object
requires a large number of k-nearest neighbor queries, and this computational overhead can limit the practical use of
LOF.
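For reference, the standard LOF formulation (introduced by Breunig et al.) can be sketched as follows; the notation here
is illustrative and is not taken from this paper:

\begin{align*}
\text{reach-dist}_k(p, o) &= \max\{\, k\text{-distance}(o),\; d(p, o) \,\} \\
\text{lrd}_k(p) &= \left( \frac{1}{|N_k(p)|} \sum_{o \in N_k(p)} \text{reach-dist}_k(p, o) \right)^{-1} \\
\text{LOF}_k(p) &= \frac{1}{|N_k(p)|} \sum_{o \in N_k(p)} \frac{\text{lrd}_k(o)}{\text{lrd}_k(p)}
\end{align*}

Every quantity above depends on the k-nearest neighborhood $N_k(p)$, which is why k-nearest neighbor queries dominate
the cost of computing LOF scores for an entire data set.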
Given the growing popularity of Graphics Processing Units (GPUs) in general-purpose computing domains, and the
availability of high-level programming languages designed for general-purpose GPU applications (e.g., CUDA), we apply
this parallel computing approach to accelerate LOF. In this paper we explore how to utilize a CUDA-based GPU
implementation of the k-nearest neighbor algorithm to accelerate LOF classification. We achieve more than a 100X
speedup over a multi-threaded dual-core CPU implementation. We also evaluate how the input data set size, the
neighborhood size (i.e., the value of k), and the feature space dimension affect execution time.
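As a rough illustration of the dominant kernel, the sketch below shows a brute-force pairwise-distance computation in
CUDA, with one thread per (query, reference) pair; selecting the k smallest distances in each row then yields the
k-nearest neighborhoods fed to the LOF computation. This is a minimal sketch under our own assumptions (array layout,
sizes, and names are hypothetical), not the paper's implementation.

// Minimal brute-force distance kernel: one thread per (query, reference) pair.
// All names and sizes are illustrative; this is not the paper's code.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void pairwiseDistances(const float *points, float *dist, int n, int d)
{
    int q = blockIdx.y * blockDim.y + threadIdx.y;  // query point index
    int r = blockIdx.x * blockDim.x + threadIdx.x;  // reference point index
    if (q >= n || r >= n) return;

    // Squared Euclidean distance between points q and r in d dimensions.
    float acc = 0.0f;
    for (int j = 0; j < d; ++j) {
        float diff = points[q * d + j] - points[r * d + j];
        acc += diff * diff;
    }
    dist[q * n + r] = acc;  // the k smallest entries of row q are the k-NN of q
}

int main()
{
    const int n = 1024, d = 8;  // illustrative data set size and dimensionality
    const size_t pointBytes = (size_t)n * d * sizeof(float);

    float *h_points = (float *)malloc(pointBytes);
    for (int i = 0; i < n * d; ++i)
        h_points[i] = (float)rand() / RAND_MAX;  // placeholder data

    float *d_points, *d_dist;
    cudaMalloc(&d_points, pointBytes);
    cudaMalloc(&d_dist, (size_t)n * n * sizeof(float));
    cudaMemcpy(d_points, h_points, pointBytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((n + block.x - 1) / block.x, (n + block.y - 1) / block.y);
    pairwiseDistances<<<grid, block>>>(d_points, d_dist, n, d);
    cudaDeviceSynchronize();

    // A per-row selection of the k smallest distances (not shown) would
    // produce the neighborhoods needed for the LOF scores.
    printf("computed %d x %d distance matrix in %d dimensions\n", n, n, d);

    cudaFree(d_points);
    cudaFree(d_dist);
    free(h_points);
    return 0;
}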