Towards Algorithm Transformation for Temporal Data Mining on GPU
Data mining allows one to analyze large amounts of data. As ever-larger volumes of data
are collected, more computing power is needed to mine them.
The GPU is a compelling piece of hardware, with an excellent price-to-performance ratio,
and has rapidly risen in popularity. However, this performance comes at a cost: the GPU's
architecture executes non-data-parallel code with marginal speedup or even slowdown.
The type of data mining we examine, temporal data mining, uses a finite state machine
(FSM), which is non-data-parallel. We contribute the concept of algorithm transformation
for increasing the data parallelism of an algorithm. We apply this transformation
process to temporal data mining, producing an algorithm that solves the same problem as
the FSM-based algorithm but is data parallel. The new GPU implementation shows a 6x
speedup over the best CPU implementation and an 11x speedup over a previous GPU
implementation.
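To make the contrast concrete, here is a minimal sketch (in plain Python, and not the paper's actual mining algorithm) of the two styles for one toy task: counting occurrences of the episode "A immediately followed by B" in an event stream. The FSM version is inherently sequential because each step depends on the previous state; the second version recasts the same count as independent per-position checks, which is the kind of data-parallel form that maps naturally onto one GPU thread per position plus a parallel reduction.

```python
def count_fsm(events):
    """Sequential FSM: state 0 = waiting for A, state 1 = just saw A.
    Each step depends on the state left by the previous step, so the
    loop cannot be split across parallel threads."""
    state, count = 0, 0
    for e in events:
        if state == 1 and e == "B":
            count += 1
        state = 1 if e == "A" else 0
    return count

def count_data_parallel(events):
    """Data-parallel form of the same count: position i is checked
    independently of every other position, so each check could run
    on its own GPU thread and the results combined by a reduction
    (here, an ordinary sum)."""
    return sum(1 for i in range(len(events) - 1)
               if events[i] == "A" and events[i + 1] == "B")

stream = list("ABCABBAAB")
print(count_fsm(stream), count_data_parallel(stream))  # both print 3
```

The transformation trades the FSM's single stateful scan for many small, independent checks; the per-check work grows slightly, but the checks no longer depend on one another, which is what lets the GPU apply thousands of threads to the problem.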