DaMoN’16, June 26-July 1, 2016, San Francisco, CA, USA.
Lin Ma, Joy Arulraj, Sam Zhao†, Andrew Pavlo, Subramanya R. Dulloor*, Michael J. Giardino^,
Jeff Parkhurst*, Jason L. Gardner*, Kshitij Doshi*, Col. Stanley Zdonik†
Carnegie Mellon University
* Intel Labs
^ Georgia Institute of Technology
† Brown University
In-memory database management systems (DBMSs) outperform disk-oriented systems for on-line transaction processing (OLTP) workloads. But this improved performance is only achievable when the database is smaller than the amount of physical memory available in the system. To overcome this limitation, some in-memory DBMSs can move cold data out of volatile DRAM to secondary storage. Such data appears as if it resides in memory with the rest of the database even though it does not.
Although several implementations of this type of cold-data storage have been proposed, the design decisions behind the technique have not been thoroughly evaluated, such as the policies for when to evict tuples and how to bring them back when they are needed. These choices are further complicated by the varying performance characteristics of different storage devices, including future non-volatile memory technologies. We explore these issues in this paper and discuss several approaches to solve them. We implemented all of these approaches in an in-memory DBMS and evaluated them using five different storage technologies. Our results show that choosing the best strategy based on the hardware improves throughput by 92–340% over a generic configuration.
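The abstract only names the design questions (when to evict cold tuples, how to bring them back on access). As a minimal illustration of the mechanism being evaluated, the C++ sketch below shows an LRU-based eviction manager that moves the coldest tuples to a stand-in for secondary storage and reloads them on demand. All names (EvictionManager, Tuple, etc.) are hypothetical and not taken from the paper or the evaluated DBMS.

```cpp
#include <cstdint>
#include <list>
#include <string>
#include <unordered_map>

// Illustrative sketch: hot tuples live in an in-memory table, cold tuples are
// moved to a placeholder for secondary storage and fetched back when accessed.
struct Tuple {
    uint64_t key;
    std::string payload;
};

class EvictionManager {
public:
    // Record an access so the tuple moves to the hot end of the LRU chain.
    void Touch(uint64_t key) {
        auto it = lru_pos_.find(key);
        if (it != lru_pos_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second);
        }
    }

    // Insert a new tuple into the in-memory (hot) table.
    void Insert(const Tuple& t) {
        hot_[t.key] = t;
        lru_.push_front(t.key);
        lru_pos_[t.key] = lru_.begin();
    }

    // Evict the coldest tuples until the hot table shrinks to `target_size`.
    void EvictColdTuples(size_t target_size) {
        while (hot_.size() > target_size && !lru_.empty()) {
            uint64_t victim = lru_.back();
            lru_.pop_back();
            lru_pos_.erase(victim);
            cold_[victim] = hot_[victim].payload;  // stand-in for a write to storage
            hot_.erase(victim);                    // drop the in-memory copy
        }
    }

    // Fetch a tuple; if it was evicted, bring it back into memory first.
    Tuple Get(uint64_t key) {
        auto hot_it = hot_.find(key);
        if (hot_it == hot_.end()) {
            Tuple t{key, cold_.at(key)};  // stand-in for a read from storage
            cold_.erase(key);
            Insert(t);
            return t;
        }
        Touch(key);
        return hot_it->second;
    }

private:
    std::unordered_map<uint64_t, Tuple> hot_;         // in-memory table
    std::unordered_map<uint64_t, std::string> cold_;  // stand-in for secondary storage
    std::list<uint64_t> lru_;                         // coldest keys at the back
    std::unordered_map<uint64_t, std::list<uint64_t>::iterator> lru_pos_;
};
```

A real system would replace the cold map with block-organized reads and writes to the storage device and would batch evictions; how aggressively to evict and how to fetch evicted tuples back is exactly where the device-specific trade-offs evaluated in the paper arise.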