Proceedings of the 20th International Conference on Data Engineering (ICDE 2004). Boston, MA. March 30 to April 2, 2004. Best Paper Award.
Improving Hash Join Performance through Prefetching
Shimin Chen, Anastassia Ailamaki, Phillip B. Gibbons*, Todd C. Mowry
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
*Intel Research Pittsburgh
http://www.pdl.cmu.edu/
Hash join algorithms suffer from extensive CPU cache stalls. This paper shows that the standard hash join algorithm for disk-oriented databases (i.e., GRACE) spends over 73% of its user time stalled on CPU cache misses, and explores the use of prefetching to improve its cache performance. Applying prefetching to hash joins is complicated by the data dependencies, multiple code paths, and inherent randomness of hashing. We present two techniques, group prefetching and software-pipelined prefetching, that overcome these complications. These schemes achieve 2.0-2.9X speedups for the join phase and 1.4-2.6X speedups for the partition phase over GRACE and simple prefetching approaches. Compared with previous cache-aware approaches (i.e., cache partitioning), the schemes are at least 50% faster on large relations and do not require exclusive use of the CPU cache to be effective.
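To make the group prefetching idea concrete, the following C sketch shows one way a hash join probe loop could be restructured: probe tuples are handled in small groups, with a first pass that hashes each tuple and prefetches its bucket, and a second pass that performs the actual comparisons once the buckets are likely in cache. This is a minimal two-stage illustration under assumed data structures; the names HashTable, Entry, hash_key, probe_group_prefetch, and GROUP_SIZE are hypothetical, and it is not the paper's implementation, which splits the probe into more stages and also prefetches during the partition phase. It relies on the GCC/Clang __builtin_prefetch intrinsic.

/* Minimal sketch of group prefetching for the probe phase of a hash join.
 * Illustrative only: the bucket layout, names, and GROUP_SIZE are
 * assumptions, not the paper's actual code.  Requires GCC or Clang for
 * __builtin_prefetch. */
#include <stddef.h>
#include <stdint.h>

#define GROUP_SIZE 16            /* tuples per prefetch group (tunable) */

typedef struct Entry {
    uint32_t key;
    uint32_t payload;
    struct Entry *next;          /* collision chain */
} Entry;

typedef struct {
    Entry **buckets;
    size_t  nbuckets;            /* assumed to be a power of two */
} HashTable;

static inline size_t hash_key(uint32_t key, size_t nbuckets) {
    return ((uint64_t)key * 2654435761u) & (nbuckets - 1);
}

/* Probe a batch of keys, emitting matches via a callback.  Pass 1 computes
 * each tuple's bucket and issues a prefetch for it; pass 2 revisits the
 * (now likely cached) buckets to do the comparisons.  The work between the
 * two passes hides the cache-miss latency of the dependent bucket access. */
void probe_group_prefetch(const HashTable *ht,
                          const uint32_t *probe_keys, size_t n,
                          void (*emit)(uint32_t key, uint32_t payload)) {
    size_t slots[GROUP_SIZE];

    for (size_t base = 0; base < n; base += GROUP_SIZE) {
        size_t g = (n - base < GROUP_SIZE) ? (n - base) : GROUP_SIZE;

        /* Pass 1: hash and prefetch the head of each tuple's bucket. */
        for (size_t i = 0; i < g; i++) {
            slots[i] = hash_key(probe_keys[base + i], ht->nbuckets);
            __builtin_prefetch(&ht->buckets[slots[i]], 0, 1);
        }

        /* Pass 2: walk each bucket chain and report matches. */
        for (size_t i = 0; i < g; i++) {
            uint32_t key = probe_keys[base + i];
            for (Entry *e = ht->buckets[slots[i]]; e != NULL; e = e->next)
                if (e->key == key)
                    emit(key, e->payload);
        }
    }
}

Software-pipelined prefetching pursues the same goal differently: instead of processing tuples group by group, it rotates the stages of consecutive tuples through a single steady-state loop, so that a prefetch for one tuple overlaps with the probe work of earlier tuples.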