CNI Seminar Series

Oneiros: KV Cache Optimization through Parameter Remapping for Multi-tenant LLM Serving

Dr. Ruihao Li, Research Scientist, Meta, AI and Systems Co-Design Group

#281

Abstract

KV cache accelerates LLM inference by avoiding redundant computation, at the expense of memory. To support larger KV caches, prior work extends GPU memory with CPU memory via CPU offloading, which involves swapping the KV cache between GPU and CPU memory. However, because the cache updates dynamically, such swapping incurs high CPU memory traffic. We make a key observation that model parameters remain constant during runtime, unlike the dynamically updated KV cache. Building on this, we introduce Oneiros, which avoids KV cache swapping by remapping, and thereby repurposing, the memory allocated to model parameters for the KV cache. This parameter remapping is especially beneficial in multi-tenant environments, where the memory used for the parameters of inactive models can be reclaimed more aggressively. Exploiting the high CPU-GPU bandwidth offered by modern hardware, such as the NVIDIA Grace Hopper Superchip, we show that Oneiros significantly outperforms state-of-the-art solutions: compared to vLLM, it reduces tail time-between-token latency by 44.8%-82.5% and tail time-to-first-token latency by 20.7%-99.3%, while achieving 6.6%-86.7% higher throughput.
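
To make the core idea concrete, the toy Python sketch below illustrates the kind of remapping the abstract describes: a single pool of fixed-size pages is shared by model parameters and KV cache, and the read-only parameter pages of an inactive tenant model are reclaimed for a growing KV cache rather than swapping KV pages to CPU memory. This is purely an illustrative assumption of how such a scheme could look; the class, method, and model names (PagePool, Page, allocate, reclaim_from, model_a, model_b) are hypothetical and are not taken from the Oneiros implementation or the talk.

# Illustrative sketch only: reclaiming an inactive model's parameter pages
# and remapping them as KV-cache pages. Parameters are read-only, so they
# can be dropped and later re-fetched over a fast CPU-GPU link (e.g., on a
# Grace Hopper Superchip) without any write-back, unlike dirty KV pages.
from dataclasses import dataclass


@dataclass
class Page:
    page_id: int
    owner: str = "free"   # "free", "params:<model>", or "kv:<request>"


class PagePool:
    """A single pool of fixed-size pages shared by parameters and KV cache."""

    def __init__(self, num_pages: int):
        self.pages = [Page(i) for i in range(num_pages)]

    def allocate(self, owner: str, count: int,
                 reclaim_from: tuple[str, ...] = ()) -> list[Page]:
        """Allocate pages, reclaiming inactive models' parameter pages if needed."""
        free = [p for p in self.pages if p.owner == "free"]
        for model in reclaim_from:
            if len(free) >= count:
                break
            # Reclaim this inactive model's parameter pages for reuse.
            reclaimed = [p for p in self.pages if p.owner == f"params:{model}"]
            for p in reclaimed:
                p.owner = "free"
            free.extend(reclaimed)
        if len(free) < count:
            raise MemoryError("not enough pages even after reclaiming parameters")
        grabbed = free[:count]
        for p in grabbed:
            p.owner = owner
        return grabbed


if __name__ == "__main__":
    pool = PagePool(num_pages=8)
    # Two tenant models resident in GPU memory.
    pool.allocate("params:model_a", 3)
    pool.allocate("params:model_b", 3)
    # model_b is currently idle, so its parameter pages are remapped to serve
    # model_a's growing KV cache instead of swapping KV pages to CPU memory.
    kv_pages = pool.allocate("kv:req_42", 4, reclaim_from=("model_b",))
    print([p.page_id for p in kv_pages])

The sketch only captures the bookkeeping intuition; in a real GPU serving stack the remapping would operate on device memory mappings and coordinate with the inference engine's block manager.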


Bio
Dr. Ruihao Li, Research Scientist, Meta, AI and Systems Co-Design Group

Ruihao Li is a Research Scientist at Meta, AI and Systems Co-Design Group. He received his Ph.D. from The University of Texas at Austin, advised by Dr. Lizy K. John and Dr. Neeraja J. Yadwadkar. His research interests include (a) power planning and management for datacenters; (b) efficient memory allocation and management for CPU/GPU systems; and (c) workload characterization.