2024 USENIX Annual Technical Conference. July 10–12, 2024 • Santa Clara, CA, USA.
Chunxu Tang1, Bin Fan1, Jing Zhao2, Chen Liang2, Yi Wang1, Beinan Wang1, Ziyue Qiu3,2, Lu Qiu1, Bowen Ding1, Shouzhuo Sun1, Saiguang Che1, Jiaming Mai1, Shouwei Chen1, Yu Zhu1, Jianjian Xie1, Yutian (James) Sun4, Yao Li2, Yangjun Zhang2, Ke Wang4, and Mingmin Chen2
1Alluxio, Inc., 2Uber, Inc., 3Carnegie Mellon University, 4Meta, Inc.
With the exponential growth of data and evolving use cases, petabyte-scale OLAP data platforms are increasingly adopting a model that decouples compute from storage. This shift, evident in organizations like Uber and Meta, introduces operational challenges including massive, read-heavy I/O traffic with potential throttling, as well as skewed and fragmented data access patterns. Addressing these challenges, this paper introduces the Alluxio local (edge) cache, a highly effective architectural optimization tailored for such environments. This embeddable cache, optimized for petabyte-scale data analytics, leverages local SSD resources to alleviate network I/O and API call pressures, significantly improving data transfer efficiency. Integrated with OLAP systems like Presto and storage services like HDFS, the Alluxio local cache has demonstrated its effectiveness in handling large-scale, enterprise-grade workloads over three years of deployment at Uber and Meta. We share insights and operational experiences in implementing these optimizations, providing valuable perspectives on managing modern, massive-scale OLAP workloads.
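To illustrate the core idea of an embeddable, read-through local cache that serves hot pages from a local SSD and falls back to remote storage on a miss, here is a minimal conceptual sketch in Java. It is not the Alluxio API: the class `LocalPageCache`, the interface `RemoteStore`, and all method names are hypothetical, introduced only to make the read path described in the abstract concrete.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Conceptual sketch (hypothetical names, not the Alluxio API): a read-through
 * cache that stores fixed-size pages of remote files on a local SSD, so
 * repeated reads of hot data avoid network I/O and remote-storage API calls.
 */
public class LocalPageCache {
    private final Path cacheDir;      // directory on the local SSD, assumed to exist
    private final RemoteStore remote; // hypothetical reader for the remote store (e.g. HDFS)
    private final int pageSize;

    public LocalPageCache(Path cacheDir, RemoteStore remote, int pageSize) {
        this.cacheDir = cacheDir;
        this.remote = remote;
        this.pageSize = pageSize;
    }

    /** Returns one page of a file, serving from the SSD cache when possible. */
    public byte[] readPage(String fileId, long pageIndex) throws IOException {
        Path cached = cacheDir.resolve(fileId + "-" + pageIndex);
        if (Files.exists(cached)) {
            return Files.readAllBytes(cached);  // cache hit: local SSD read
        }
        // Cache miss: fetch the page from remote storage, then populate the cache.
        byte[] page = remote.read(fileId, pageIndex * pageSize, pageSize);
        Files.write(cached, page);
        return page;
    }

    /** Hypothetical abstraction over the remote storage service. */
    public interface RemoteStore {
        byte[] read(String fileId, long offset, int length) throws IOException;
    }
}
```

In a deployment like the one the paper describes, such a cache would be embedded inside each compute worker (e.g. a Presto worker) so that cache hits never leave the machine; eviction, concurrency control, and page sizing are omitted here for brevity.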