PARALLEL DATA LAB 

PDL Abstract

Baleen: ML Admission & Prefetching for Flash Caches

22nd USENIX Conference on File and Storage Technologies (FAST'24), Feb. 27–29, 2024, Santa Clara, CA.

Daniel Lin-Kit Wong*, Hao Wu†, Carson Molder§, Sathya Gunasekar†, Jimmy Lu†, Snehal Khandkar†, Abhinav Sharma†, Daniel S. Berger‡,
Nathan Beckmann*, Gregory R. Ganger*

*Carnegie Mellon University
†Meta
‡Microsoft & University of Washington
§UT Austin

http://www.pdl.cmu.edu/

Flash caches are used to reduce peak backend load for throughput-constrained data center services, reducing the total number of backend servers required. Bulk storage systems are a large-scale example, backed by high-capacity but low-throughput hard disks, and using flash caches to provide a more cost-effective storage layer underlying everything from blobstores to data warehouses. However, flash caches must address the limited write endurance of flash by limiting the long-term average flash write rate to avoid premature wearout. To do so, most flash caches must use admission policies to filter cache insertions and maximize the workload-reduction value of each flash write. The Baleen flash cache uses coordinated ML admission and prefetching to reduce peak backend load. After learning painful lessons with our early ML policy attempts, we exploit a new cache residency model (which we call episodes) to guide model training. We focus on optimizing for an end-to-end system metric (Disk-head Time) that measures backend load more accurately than IO miss rate or byte miss rate. Evaluation using Meta traces from seven storage clusters shows that Baleen reduces Peak Disk-head Time (and hence the number of backend hard disks required) by 12% over state-of-the-art policies for a fixed flash write rate constraint. Baleen-TCO, which chooses an optimal flash write rate, reduces our estimated total cost of ownership (TCO) by 18%. Code and traces are available via https://www.pdl.cmu.edu/CILES/.
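
To make the Disk-head Time idea concrete, below is a minimal sketch of one plausible cost model: it assumes each backend fetch costs a fixed head-positioning overhead plus transfer time at the disk's sustained bandwidth. The constants and helper names (SEEK_OVERHEAD_S, DISK_BANDWIDTH_MBPS, disk_head_time) are hypothetical placeholders for illustration, not values or code from the paper.

```python
# Illustrative sketch of a Disk-head Time style cost model (not the paper's code).
# Assumption: each cache miss costs a fixed positioning overhead plus transfer
# time proportional to the bytes fetched from the backend hard disk.

SEEK_OVERHEAD_S = 0.008        # hypothetical positioning overhead per fetch (seconds)
DISK_BANDWIDTH_MBPS = 150.0    # hypothetical sustained HDD bandwidth (MB/s)

def disk_head_time(bytes_fetched: int) -> float:
    """Estimated disk-head seconds consumed by one backend fetch."""
    return SEEK_OVERHEAD_S + bytes_fetched / (DISK_BANDWIDTH_MBPS * 1e6)

def backend_load(miss_sizes: list[int]) -> float:
    """Total disk-head seconds for a sequence of miss sizes (in bytes)."""
    return sum(disk_head_time(size) for size in miss_sizes)

# Example: two policies serving the same trace. The policy with fewer, larger
# misses can consume less disk-head time even though it fetches more bytes,
# which is why optimizing Disk-head Time differs from optimizing IO or byte
# miss rate alone.
policy_a_misses = [128 * 1024] * 100          # many small fetches (12.5 MiB total)
policy_b_misses = [4 * 1024 * 1024] * 10      # few large fetches (40 MiB total)
print(backend_load(policy_a_misses))           # ~0.89 disk-head seconds
print(backend_load(policy_b_misses))           # ~0.36 disk-head seconds
```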

FULL PAPER: pdf
Code / Traces
SLIDES: pdf
VIDEO: youtube