Scaling File System Metadata Performance with Stateless Caching and Bulk Insertion

Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-14-103. May 2014.

Kai Ren, Qing Zheng, Swapnil Patil, Garth Gibson

Carnegie Mellon University


The growing size of modern storage systems is expected to reach and exceed billions of objects, making metadata scalability critical to overall performance. Many existing parallel and cluster file systems focus only on providing highly parallel access to file data, but lack a scalable metadata service. In this paper, we introduce a middleware design called IndexFS that adds support to existing file systems such as PVFS and Hadoop HDFS for scalable, high-performance operations on metadata and small files. IndexFS uses a table-based architecture that incrementally partitions the namespace on a per-directory basis, preserving server and disk locality for small directories. An optimized log-structured layout is used to store metadata and small files efficiently. We also propose two client storm-free caching techniques: bulk namespace insertion for creation-intensive workloads such as N-N checkpointing, and stateless consistent metadata caching for hot-spot mitigation. By combining these techniques, we have successfully scaled IndexFS to 128 metadata servers for various metadata workloads. Experiments demonstrate that our out-of-core metadata throughput outperforms PVFS by 50% to an order of magnitude.
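The incremental per-directory partitioning mentioned in the abstract could be sketched roughly as follows. This is a hypothetical illustration, not the IndexFS implementation: the function name, the MD5-based hashing, and the `split_level` parameter are assumptions chosen to show how a small directory stays on one metadata server while a growing directory spreads its entries over progressively more servers.

```python
import hashlib

def server_for_entry(dir_id: int, name: str,
                     num_servers: int, split_level: int = 0) -> int:
    """Hypothetical sketch: choose a metadata server for one directory entry.

    A small directory (split_level 0) keeps all entries on its "home"
    server, preserving server and disk locality. As the directory grows,
    raising split_level spreads entries over 2**split_level servers.
    """
    # Hash the (directory, entry name) pair to a stable 64-bit value.
    digest = hashlib.md5(f"{dir_id}/{name}".encode()).digest()
    h = int.from_bytes(digest[:8], "big")
    # Number of partitions doubles with each split, capped by the cluster size.
    partitions = min(2 ** split_level, num_servers)
    home = dir_id % num_servers  # home server of the directory itself
    return (home + h % partitions) % num_servers
```

Under this sketch, a directory that has never split routes every `create` to one server, while a heavily populated directory fans out across the cluster, which matches the paper's goal of locality for small directories and parallelism for large ones.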

KEYWORDS: Parallel File System, Metadata, Storm-free Caching, Log-Structured Merge Tree, Bulk Insertion

FULL TR: pdf




© 2017. Last updated 10 May 2014