IEEE Computer Architecture Letters (CAL), May 2012.
Justin Meza, Jichuan Chang†, HanBin Yoon, Onur Mutlu, Parthasarathy Ranganathan†
Carnegie Mellon University
5000 Forbes Ave.
Pittsburgh, PA 15213
†Hewlett-Packard Labs
http://www.pdl.cmu.edu/
ABSTRACT: Hybrid main memories that use DRAM as a cache to scalable non-volatile memories such as phase-change memory (PCM) can provide much larger capacity than traditional main memories. A key challenge in enabling high-performance, scalable hybrid memories, however, is efficiently managing the metadata (e.g., tags) for data cached in DRAM at a fine granularity. Based on the observation that storing metadata off-chip, in the same DRAM row as its data, exploits row buffer locality, this paper reduces the overhead of fine-granularity DRAM caches by caching on-chip, in a small buffer, only the metadata for recently accessed rows. Leveraging the flexibility and efficiency of such a fine-granularity DRAM cache, we also develop an adaptive policy that chooses the best granularity at which to migrate data into DRAM. On a hybrid memory with a 512MB DRAM cache, our proposal with an 8KB on-chip buffer achieves within 6% of the performance of, and 18% better energy efficiency than, a conventional 8MB SRAM metadata store, even when the energy overhead due to the large SRAM metadata storage is not considered.
KEYWORDS: Cache memories, tag storage, non-volatile memories, hybrid main memories.
FULL PAPER: pdf
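To make the metadata-buffering idea from the abstract concrete, the C sketch below caches the per-row tag metadata of recently accessed DRAM rows in a small on-chip buffer and falls back to reading the metadata stored alongside the data in the DRAM row on a buffer miss. This is an illustrative sketch only, not the paper's implementation: the structure row_metadata_t, the sizes ROWS_TRACKED and BLOCKS_PER_ROW, the helper dram_read_row_metadata, and the LRU replacement policy are all assumptions made for the example.

/* Illustrative sketch: a small on-chip buffer holding the cache-management
 * metadata (per-block tags, validity) of recently accessed DRAM rows.
 * On a buffer miss, the metadata is read from the DRAM row itself, where
 * the scheme stores it next to the data it describes.  All names, sizes,
 * and fields are assumptions for illustration. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define ROWS_TRACKED   64   /* assumed number of rows the on-chip buffer tracks */
#define BLOCKS_PER_ROW 32   /* assumed number of cache blocks per DRAM row      */

typedef struct {
    bool     valid;                      /* entry currently describes some row   */
    uint32_t row_id;                     /* which DRAM cache row it describes    */
    uint32_t block_tag[BLOCKS_PER_ROW];  /* per-block tags cached from the row   */
    uint32_t block_valid;                /* bitmap: which blocks hold valid data */
    uint64_t last_access;                /* timestamp for LRU replacement        */
} row_metadata_t;

static row_metadata_t meta_buf[ROWS_TRACKED];  /* zero-initialized: all invalid */
static uint64_t       access_clock;

/* Stand-in for the memory controller reading the metadata words stored in the
 * same DRAM row as the cached data (hypothetical; zeroed here).  In hardware
 * this read would hit the row buffer already opened by the data access. */
static void dram_read_row_metadata(uint32_t row_id, row_metadata_t *out)
{
    memset(out, 0, sizeof(*out));
    out->row_id = row_id;
}

/* Look up a row's metadata; a hit in the small on-chip buffer avoids an
 * extra DRAM access, a miss refills the LRU entry from the DRAM row. */
row_metadata_t *lookup_row_metadata(uint32_t row_id)
{
    int victim = 0;
    for (int i = 0; i < ROWS_TRACKED; i++) {
        if (meta_buf[i].valid && meta_buf[i].row_id == row_id) {
            meta_buf[i].last_access = ++access_clock;   /* buffer hit */
            return &meta_buf[i];
        }
        /* Track the least recently used (or an invalid) entry as the victim. */
        if (!meta_buf[i].valid ||
            meta_buf[i].last_access < meta_buf[victim].last_access)
            victim = i;
    }
    /* Buffer miss: fetch the row's metadata from DRAM into the victim entry. */
    dram_read_row_metadata(row_id, &meta_buf[victim]);
    meta_buf[victim].valid       = true;
    meta_buf[victim].row_id      = row_id;
    meta_buf[victim].last_access = ++access_clock;
    return &meta_buf[victim];
}

The key point the sketch illustrates is that a miss in this small buffer is inexpensive: the metadata read targets the same DRAM row as the data access that triggered it, which is why keeping only recently accessed rows' metadata on-chip can approach the performance of a full SRAM metadata store.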