Red Hat Launches llm-d, a Kubernetes-Based Platform for Scalable AI Inference

13:10, 22.05.2025

Article Content

  • Key Features of llm-d
  • Cooperation with Leading Players in the AI Industry
  • Technology and Architecture

Red Hat has introduced llm-d, a new open source project for high-performance distributed inference of large language models (LLMs). The platform is built on Kubernetes and focuses on simplifying the scaling of generative AI workloads. The source code is available on GitHub under the Apache 2.0 license.

Key Features of llm-d

The main features of the platform include:

  • Optimized Inference Scheduler for vLLM;
  • Disaggregated serving architecture;
  • Reuse of prefix caches (illustrated in the sketch after this list);
  • Flexible scaling depending on traffic, tasks, and available resources.
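
Prefix cache reuse is a vLLM capability that llm-d builds on: requests sharing a common prompt prefix can reuse the KV-cache blocks already computed for that prefix. Below is a minimal, hedged sketch of enabling it directly in vLLM; the model name and prompts are illustrative, and this shows plain vLLM usage rather than llm-d's own API.

```python
from vllm import LLM, SamplingParams

# enable_prefix_caching lets vLLM reuse KV-cache blocks for prompts that
# share a common prefix (e.g. a long shared system prompt).
llm = LLM(model="facebook/opt-125m", enable_prefix_caching=True)

shared_prefix = "You are a helpful assistant. Answer concisely.\n\n"
prompts = [
    shared_prefix + "Question: What is Kubernetes?",
    shared_prefix + "Question: What is distributed inference?",
]

params = SamplingParams(temperature=0.0, max_tokens=64)
# The second prompt can reuse the cached KV blocks computed for the shared
# prefix, reducing prefill work.
outputs = llm.generate(prompts, params)
for out in outputs:
    print(out.outputs[0].text)
```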

Cooperation with Leading Players in the AI Industry

Development is being carried out in partnership with companies such as Nvidia, AMD, Intel, IBM Research, Google Cloud, CoreWeave, Hugging Face, and others. This cooperation underscores how seriously llm-d is being taken and highlights the platform's potential to become an industry standard.

Technology and Architecture

The project uses the vLLM library for distributed inference, along with components such as LMCache for KV cache offloading, AI-aware intelligent traffic routing, high-efficiency communication APIs, and automatic scaling based on load and infrastructure.
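
To make the traffic-routing idea concrete, here is a small, self-contained Python sketch of prefix-aware request routing: a request is sent to the replica most likely to already hold its prompt prefix in its KV cache. The class, scoring heuristic, and replica names are all illustrative assumptions, not llm-d's actual scheduler code.

```python
from collections import defaultdict

class PrefixAwareRouter:
    """Toy router that prefers replicas which have seen a prompt's prefix."""

    def __init__(self, replicas):
        self.replicas = replicas
        # Stand-in for real KV-cache state reported by the serving workers:
        # which prompt prefixes each replica has recently handled.
        self.seen_prefixes = defaultdict(set)

    def _prefix_key(self, prompt, length=64):
        # Crude stand-in for token-block hashing: key on the first N chars.
        return prompt[:length]

    def route(self, prompt):
        key = self._prefix_key(prompt)
        # Prefer a replica that already served this prefix, so its KV cache
        # can be reused during prefill.
        for replica in self.replicas:
            if key in self.seen_prefixes[replica]:
                return replica
        # Otherwise pick the replica tracking the fewest prefixes (a rough
        # load proxy) and remember the new prefix there.
        replica = min(self.replicas, key=lambda r: len(self.seen_prefixes[r]))
        self.seen_prefixes[replica].add(key)
        return replica

router = PrefixAwareRouter(["vllm-0", "vllm-1"])
system = "You are a helpful assistant. " * 3  # longer than the prefix key
print(router.route(system + "What is llm-d?"))  # falls back to least loaded
print(router.route(system + "What is vLLM?"))   # reuses the same replica
```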

All of this allows the system to be adapted to different usage scenarios and performance requirements. The launch of llm-d could prove a significant step toward democratizing powerful AI systems and making them accessible to a wide audience of developers and researchers.
