PARALLEL DATA LAB 

PDL Abstract

Learning to Walk: Architecting Learned Virtual Memory Translation

MICRO 2025: The 58th IEEE/ACM International Symposium on Microarchitecture, October 18-22, 2025, Seoul, Korea.

Kaiyang Zhao, Yuang Chen, Xenia Xu, Dan Schatzberg^, Nastaran Hajinazar*, Rupin Vakharwala*, Andy Anderson*, Dimitrios Skarlatos

Carnegie Mellon University
* Intel
^ Meta

http://www.pdl.cmu.edu/

The rise in memory demands of emerging datacenter applications has placed virtual memory translation in the spotlight, exposing it as a significant performance bottleneck. To address this problem, this paper introduces Learned Virtual Memory (LVM), a page table structure that effectively provides optimal single-access address translation. LVM is founded on a novel learned index model that dynamically adapts the address translation procedure to the characteristics of an application’s virtual address space. Furthermore, LVM’s learned index requires minimal memory, does not impose stringent physical contiguity requirements, enjoys high cacheability in the MMU, efficiently supports insertions, and relies on simple fixed-point arithmetic. Finally, LVM supports all features of virtual memory, including multiple page sizes. We evaluate LVM with a set of operating system (OS) extensions in Linux, RTL synthesis, and full-system simulations across a wide range of workloads. LVM reduces address translation overhead by an average of 44% over radix page tables while requiring 1.5× less page walk cache area. Overall, LVM achieves a 2-27% speedup in application execution time and comes within 1% of an ideal page table.
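The abstract describes the mechanism only at a high level, so the C sketch below is purely illustrative: it shows how a learned index in the general style of piecewise linear models could predict a page-table slot from a virtual page number using fixed-point arithmetic, then resolve the translation with a bounded probe. Every name and structure here (lvm_segment, lvm_slot, lvm_translate, the 16.16 fixed-point format, the error window) is a hypothetical assumption for this example, not LVM's published design.

/*
 * Illustrative sketch only. The paper does not publish this interface;
 * lvm_segment, lvm_slot, lvm_translate, the 16.16 fixed-point format,
 * and the error-window probe are assumptions made for this example.
 */
#include <stdint.h>
#include <stddef.h>

#define FXP_SHIFT 16  /* 16.16 fixed point; the abstract notes LVM relies
                         on simple fixed-point arithmetic */

/* One linear segment of the learned index: predicts a slot index from a
 * virtual page number (VPN) over the VPN range the segment covers. */
struct lvm_segment {
    uint64_t first_vpn;   /* lowest VPN covered by this segment */
    int64_t  slope_fxp;   /* slope in 16.16 fixed point */
    int64_t  intercept;   /* base slot index for first_vpn */
};

/* One slot of the flat translation table the model indexes into. */
struct lvm_slot {
    uint64_t vpn;   /* tag: VPN actually stored in this slot */
    uint64_t pte;   /* page table entry: frame number plus permission bits */
};

/* Predict a slot with fixed-point math, then probe within a small trained
 * error bound. Segment selection (finding which segment covers the VPN) is
 * omitted for brevity. Returns 0 on a miss, standing in for a fallback to
 * a conventional page walk. */
static uint64_t lvm_translate(const struct lvm_segment *seg,
                              const struct lvm_slot *table, size_t nslots,
                              uint64_t vpn, int err_bound)
{
    int64_t delta = (int64_t)(vpn - seg->first_vpn);
    int64_t guess = seg->intercept + ((delta * seg->slope_fxp) >> FXP_SHIFT);

    for (int d = -err_bound; d <= err_bound; d++) {
        int64_t i = guess + d;
        if (i >= 0 && (size_t)i < nslots && table[i].vpn == vpn)
            return table[i].pte;   /* hit */
    }
    return 0;   /* miss: fall back to a full walk */
}

The point of the sketch is the error bound: the tighter the trained bound, the smaller the probe window, and with err_bound == 0 the common case collapses to the single memory access the abstract highlights.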

FULL TR: pdf