PARALLEL DATA LAB 

PDL Abstract

DéjàVu: KV-cache Streaming for Fast, Fault-tolerant Generative LLM Serving

Proceedings of the 41st International Conference on Machine Learning (ICML), Vienna, Austria. PMLR 235, July 21-27, 2024.

Foteini Strati 1,2, Sara McAllister 1,3, Amar Phanishayee 4, Jakub Tarnawski 4, Ana Klimovic 2

1 MSR Project Fiddle Intern
2 ETH Zurich
3 Carnegie Mellon University
4 Microsoft Research

http://www.pdl.cmu.edu/

Distributed LLM serving is costly and often underutilizes hardware accelerators due to three key challenges: bubbles in pipeline-parallel deployments caused by the bimodal latency of prompt and token processing, GPU memory overprovisioning, and long recovery times in case of failures. DéjàVu addresses all these challenges using a versatile and efficient KV cache streaming library (DéjàVuLib). Using DéjàVuLib, we propose and implement efficient prompt-token disaggregation to reduce pipeline bubbles, microbatch swapping for efficient GPU memory management, and state replication for fault-tolerance. We highlight the efficacy of these solutions on a range of large models across cloud deployments.
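To make the prompt-token disaggregation idea concrete, the sketch below shows how streaming the KV cache layer by layer lets a token-generation worker begin installing state before the prompt (prefill) pass has finished, overlapping transfer with computation. This is a minimal illustration only, not the DéjàVuLib API; all names (prompt_worker, token_worker, NUM_LAYERS) are hypothetical, and Python queues and threads stand in for the GPU-to-GPU streaming channel.

```python
# Minimal sketch of layer-by-layer KV-cache streaming between a prompt
# (prefill) worker and a token-generation worker. Hypothetical names;
# queues/threads stand in for the real streaming transport.
import queue
import threading
import time

NUM_LAYERS = 4  # stand-in for the model's transformer layer count


def prompt_worker(kv_stream: queue.Queue) -> None:
    """Runs the prompt pass and streams each layer's KV cache as soon
    as that layer's attention computation finishes."""
    for layer in range(NUM_LAYERS):
        time.sleep(0.05)                 # stand-in for the layer's compute
        kv_cache = f"kv-layer-{layer}"   # stand-in for the real KV tensors
        kv_stream.put((layer, kv_cache)) # stream immediately, don't batch
    kv_stream.put(None)                  # end-of-stream marker


def token_worker(kv_stream: queue.Queue) -> None:
    """Receives streamed KV caches and installs them incrementally,
    so token generation can start right after the last layer arrives."""
    received = {}
    while (item := kv_stream.get()) is not None:
        layer, kv_cache = item
        received[layer] = kv_cache       # install into this worker's memory
        print(f"token worker: installed KV cache for layer {layer}")
    print(f"token worker: all {len(received)} layers ready, generating tokens")


if __name__ == "__main__":
    stream: queue.Queue = queue.Queue()
    consumer = threading.Thread(target=token_worker, args=(stream,))
    consumer.start()
    prompt_worker(stream)
    consumer.join()
```

The same streaming channel generalizes to the paper's other uses of DéjàVuLib: microbatch swapping streams KV caches between GPU and host memory, and fault tolerance streams them to a replica so a failed worker's state can be restored without recomputing the prompt.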

FULL PAPER: pdf