
Commit cad6605

format and phrase update

address the review comments

Signed-off-by: roger feng <roger.feng@intel.com>

1 parent 2645012

1 file changed: +39 −26 lines

_posts/2025-10-31-vllm-on-intel-Arc-pro-B.md renamed to _posts/2025-10-31-intel-arc-pro-b.md
title: "Fast and Affordable LLMs serving on Intel Arc Pro B-Series GPUs with vLLM"
author: "Intel vLLM Team"
---

[Intel® Arc™ Pro B-Series GPU Family](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/arc/workstations/b-series/overview.html) GPUs deliver powerful AI capabilities with a focus on accessibility and exceptional price-to-performance ratios. Their large memory capacity and scalability with multi-GPU setups make it possible to run the latest, large and capable AI models locally, making advanced AI inference accessible to professionals looking to deploy Large Language Models (LLMs) without the premium costs typically associated with AI hardware.

vLLM is at the core of the software stack enabling fast and cost-effective LLM serving on Intel Arc Pro B-Series GPUs. Over the past few months, Intel developers have been actively collaborating with the vLLM community to enable and optimize key features and ensure seamless performance with multi-GPU scaling and PCIe P2P data transfer on Intel Arc Pro B-Series GPUs.

Intel® Arc™ Pro B-series GPUs provide key vLLM features and optimizations, including:
- Solid inference performance for DeepSeek distilled Llama/Qwen models
- Long context length (>50K) with good scaling on batch size
- Support for embedding, reranker, and pooling models
- Support for multi-modal models
- Well-optimized Mixture of Experts (MoE) models (GPT-OSS, DeepSeek-v2-lite, Qwen3-30B-A3B, etc.)
- Per-layer online quantization to reduce the required GPU memory
- Support for Data Parallelism, Tensor Parallelism and Pipeline Parallelism
- FP16 and BF16 path support for torch.compile
- Speculative decoding with n-gram, EAGLE and EAGLE3 methods
- Tool calling
- Mixed precision support for BF16, FP16, INT4 and FP8 vLLM recipes

## Advanced Optimizations for MoE Models

Mixture of Experts (MoE) is a model approach where multiple specialized expert networks collaborate to process input sequences, guided by a gating mechanism. For each token in the input sequence, the gating network dynamically selects which subset of experts should process that token. Rather than relying on a single dense feedforward layer, MoE architectures employ multiple parallel GEMM operations distributed across expert networks to achieve equivalent computational functionality. This design introduces structured sparsity into the model, as only a subset of experts is activated for any given input, thereby improving computational efficiency while maintaining model capacity. Beyond general optimizations for General Matrix Multiplications (GEMM) and Flash Attention, these MoE components (experts and gating network) represent the key performance contributors in MoE-based language models.

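To make the gating step concrete, the sketch below shows per-token expert selection in plain C++: a softmax over the gate logits followed by a top-k pick. The sizes and names (`kTopK`, `route_token`) are illustrative only and are not taken from the vLLM or SYCL kernel code.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <numeric>
#include <utility>
#include <vector>

// Illustrative size; real MoE models route each token to a few experts
// (e.g. top-2 to top-8) out of a much larger expert pool.
constexpr int kTopK = 2;

// For one token: softmax over the gate logits, then pick the top-k experts.
// Only the selected experts' GEMMs run for this token, which is the
// structured sparsity described above.
std::vector<std::pair<int, float>> route_token(const std::vector<float>& gate_logits) {
    // Numerically stable softmax.
    const float max_logit = *std::max_element(gate_logits.begin(), gate_logits.end());
    std::vector<float> probs(gate_logits.size());
    float sum = 0.0f;
    for (size_t e = 0; e < gate_logits.size(); ++e) {
        probs[e] = std::exp(gate_logits[e] - max_logit);
        sum += probs[e];
    }
    for (float& p : probs) p /= sum;

    // Indices of the k largest routing probabilities.
    std::vector<int> order(gate_logits.size());
    std::iota(order.begin(), order.end(), 0);
    std::partial_sort(order.begin(), order.begin() + kTopK, order.end(),
                      [&](int a, int b) { return probs[a] > probs[b]; });

    std::vector<std::pair<int, float>> selected;
    for (int k = 0; k < kTopK; ++k) selected.emplace_back(order[k], probs[order[k]]);
    return selected;
}

int main() {
    const std::vector<float> gate_logits = {0.1f, 2.3f, -0.7f, 1.9f, 0.0f, -1.2f, 0.4f, 0.8f};
    for (const auto& [expert, weight] : route_token(gate_logits))
        std::printf("expert %d, weight %.3f\n", expert, weight);
}
```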
However, naive implementations of MoE GEMM operations can suffer from significant efficiency bottlenecks. The typical approach, in which individual GEMM kernels are launched sequentially from a for-loop, generates excessive kernel launch overhead and introduces substantial scheduling latency. Furthermore, since expert routing decisions are produced by the gating network, GEMM operations must wait for gate computation to complete before execution can begin. This data dependency creates pipeline stalls that disrupt the kernel execution stream and severely limit GPU parallelism, preventing optimal device utilization.

Targeting these limitations of MoE GEMM, we designed a persistent zero-gap kernel that achieves over 80% of the hardware capacity of the Intel® Arc™ Pro B60 GPU.

### Optimization 1. Single kernel launched in persistent loop

A single-kernel design removes the launching and scheduling overhead mentioned above. In addition, a persistent loop removes the need for launch parameters that depend on the results of the expert routing network. Together they keep device parallelism at its maximum.

Before the persistent kernel, the device could be seen sitting idle while waiting on the host:

Enabling the persistent kernel keeps the device busy:

The Intel® Arc™ Pro B60 GPU has 20 XeCores, each with identical resources that can host multiple SYCL groups. In our design, we launch two groups per XeCore to balance compute and memory bandwidth needs.

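To illustrate the idea (not the actual SYCL kernel), the plain C++ sketch below uses host threads as stand-ins for the GPU work-groups: instead of one launch per expert GEMM block, a fixed set of persistent workers is launched once and loops over the blocks.

```cpp
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for one tile of an expert GEMM; the real kernel does the matrix math here.
void gemm_block(int block_id) { std::printf("block %d\n", block_id); }

// Naive scheme: one launch per block. On the GPU this corresponds to one kernel
// launch per expert GEMM, each waiting on the routing result, paying launch
// overhead and host-device synchronization on every iteration.
void launch_per_block(int num_blocks) {
    for (int b = 0; b < num_blocks; ++b) {
        std::thread t(gemm_block, b);  // stand-in for a kernel launch
        t.join();                      // stand-in for waiting on completion
    }
}

// Persistent scheme: a fixed number of workers (two groups per XeCore in the
// design above) is launched once; each worker loops over blocks with a static
// stride. The block count comes from the routing result but is only read inside
// the loop, so the launch itself no longer depends on the gating output.
void persistent_loop(int num_workers, int num_blocks) {
    std::vector<std::thread> workers;
    for (int w = 0; w < num_workers; ++w) {
        workers.emplace_back([=] {
            for (int b = w; b < num_blocks; b += num_workers) gemm_block(b);
        });
    }
    for (auto& t : workers) t.join();
}

int main() {
    launch_per_block(8);
    persistent_loop(/*num_workers=*/4, /*num_blocks=*/8);
}
```
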
### Optimization 2. Dynamic balancing of computing groups
One observation is that each group ends up with a different amount of work because of the imbalance in expert routing. If every group loops over a fixed stride of work, there is always one group that takes the largest share of the work and another that takes the smallest, and the gap between them can accumulate to as much as 15% of the total MoE GEMM time. A better alternative is that whichever group finishes a task in one loop iteration starts the next available task immediately.

As a concrete example, suppose 40 groups must crunch 200 GEMM blocks. With a static stride, group 0 loops through blocks 0, 40, 80, ..., group 1 through blocks 1, 41, 81, and so on. A caveat is that, due to the nature of MoE, each GEMM block may not have the same compute intensity, and randomized access patterns let certain groups finish their work faster than others. This limits efficiency in such a way that the groups that always finish their jobs earlier cannot help those that always meet heavy loads.

We mitigate the effect by letting each group compete for the next job through an atomic counter. Whichever group finishes computing a GEMM block obtains a ticket from the atomic counter, and that ticket decides which block it takes next. With this scheme we eliminated the small gaps in the kernel loop and achieved perfect scheduling across all expert-routing scenarios.

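A minimal CPU-side analogue of this scheme (illustrative only; the real kernel uses a device atomic on the GPU): each worker claims the index of the next unprocessed block with a single `fetch_add`, so workers that hit light blocks simply claim more of them.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

// Each worker repeatedly claims the next unprocessed GEMM block through one
// atomic increment instead of walking a fixed stride. Workers that finish
// light blocks early simply claim more blocks, so nobody idles at the end.
void balanced_moe_gemm(int num_workers, int num_blocks) {
    std::atomic<int> next_block{0};
    std::vector<std::thread> workers;
    for (int w = 0; w < num_workers; ++w) {
        workers.emplace_back([&] {
            while (true) {
                const int b = next_block.fetch_add(1, std::memory_order_relaxed);
                if (b >= num_blocks) break;  // every block has been claimed
                // ... compute GEMM block b here ...
                std::printf("claimed block %d\n", b);
            }
        });
    }
    for (auto& t : workers) t.join();
}

int main() { balanced_moe_gemm(/*num_workers=*/40, /*num_blocks=*/200); }
```
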
### Optimization 3. Fast MXFP4 to BFLOAT16 algorithm with prepack for memory load efficiency
Prepacking has long been known to improve memory load efficiency. For 4-bit memory loads, a hardware-friendly format can increase efficiency by up to 30%, as observed in our case. In addition, a naive FP4 to BF16 conversion incurs too many instructions, which prompts the need for a better alternative (borrowed from oneDNN): place the E2M1 bits onto the corresponding exponent/mantissa positions of the wider type and multiply by the scale difference between the two types:

`bitcast_bf16((x << 12) >> 6 & 0x81c0) * 2^126`

This minimizes the number of instructions needed to convert FP4 to BF16.

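The conversion can be spelled out in standard C++ as below; this is a host-side model of the trick written for clarity (C++20 for `std::bit_cast` and well-defined arithmetic right shifts), not the GPU kernel code itself.

```cpp
#include <bit>      // std::bit_cast (C++20)
#include <cstdint>
#include <cstdio>

// Decode one MXFP4 (E2M1) nibble, modelled through BF16 bits.
// x << 12 puts sign/exponent/mantissa at the top of a 16-bit word; the
// arithmetic >> 6 drops them onto the BF16 sign/exponent/mantissa positions
// (replicating the sign bit); & 0x81C0 keeps exactly those bits. The result
// is a BF16 value whose exponent is biased too small, which is fixed by
// multiplying by 2^126 (the bias difference between E2M1, bias 1, and BF16, bias 127).
float mxfp4_to_float(uint8_t nibble) {
    int16_t shifted = static_cast<int16_t>(nibble << 12);  // sign lands in bit 15
    uint16_t bf16_bits = static_cast<uint16_t>((shifted >> 6) & 0x81C0);
    // BF16 is the upper half of an IEEE-754 float32, so widen and bit-cast.
    float scaled = std::bit_cast<float>(static_cast<uint32_t>(bf16_bits) << 16);
    return scaled * 0x1p126f;  // 2^126
}

int main() {
    // All 16 E2M1 code points: 0, 0.5, 1, 1.5, 2, 3, 4, 6 and their negatives.
    for (uint8_t code = 0; code < 16; ++code)
        std::printf("0x%X -> %g\n", code, mxfp4_to_float(code));
}
```
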
## Performance

With 24GB of high-bandwidth VRAM, 456 GB/s memory bandwidth and 160 Intel® Xe Matrix Extensions (Intel® XMX) AI engines, Intel Arc Pro B-Series GPUs offer good hardware capacity for optimizing high-touch models on vLLM. The full list of supported models can be found at [intel/ai-containers](https://github.com/intel/ai-containers/blob/main/vllm/0.10.2-xpu.md#supported-models).

DeepSeek distilled models sized from 8B to 70B are optimized for good output token throughput on a system with eight Intel® Arc™ Pro GPUs.

![model perf](/assets/figures/2025-vllm-on-intel-arc/perf-figure1.png)
Figure 1: FP8 model output token throughput with max concurrency under SLA on a system configured with 8 Intel® Arc™ Pro B60 GPU cards.

The system sustains next-token latencies below 100 ms under a good concurrency load.

![model perf](/assets/figures/2025-vllm-on-intel-arc/perf-figure2.png)
Figure 2: Qwen-32B next token latency with increasing number of prompts on a system configured with 4 Intel® Arc™ Pro B60 GPU cards.

The model inference maintains consistent next-token latency across a wide range of input sequence lengths, scaling from 1K to over 40K tokens. This performance is underpinned by highly optimized flash attention kernels that parallelize operations across the sequence length dimension.

![model perf](/assets/figures/2025-vllm-on-intel-arc/perf-figure3.png)
Figure 3: TTFT/TPOT for Llama-70B single batch with long context input from 1K to 40K sequences on a system configured with 8 Intel® Arc™ Pro B60 GPU cards.

GPT-OSS: The Intel® Arc™ Pro B60 GPU also demonstrates exceptional performance with OpenAI's recently launched GPT-OSS model, providing developers and enterprises with a powerful, cost-effective solution for large-scale AI inference, as shown in the table below.

| GPT-OSS-120b |MXFP4 |4 |2048/2048 |50 |8.11 |41.98 |1085.58|
| GPT-OSS-120b |MXFP4 |4 |5120/5120 |20 |8.60 |30.60 |619.10 |

Table 1: GPT-OSS vLLM inference throughput using 1-4 GPUs on an x8 Intel® Arc™ Pro B-series system.

MLPerf: Intel Arc Pro B-Series GPUs shine in the recently published MLPerf Inference v5.1 results ([link](https://mlcommons.org/benchmarks/inference-datacenter/)). On Llama 8B, the Intel® Arc™ Pro B60 GPU demonstrates performance-per-dollar advantages. The results were achieved with vLLM as the serving framework.

## How to set up

The vLLM docker image with Intel XPU support can be downloaded from [intel/vllm - Docker Image | Docker Hub](https://hub.docker.com/r/intel/vllm). MoE models such as gpt-oss are supported since the vLLM 0.10.2 docker release. The examples below require host OS Ubuntu 25.04 and KMD driver 6.14.0, running on a Xeon system configured with 4 Intel® Arc™ Pro B60 GPU cards plugged into PCIe slots.

Get the released docker image with the command:

```bash
docker pull intel/vllm:0.10.2-xpu
```
Instantiate a docker container with the command:

```bash
docker run -t -d --shm-size 10g --net=host --ipc=host --privileged -v /dev/dri/by-path:/dev/dri/by-path --name=vllm-test --device /dev/dri:/dev/dri --entrypoint= intel/vllm:0.10.2-xpu /bin/bash
```
Run the vLLM server with gpt-oss-120b on 4 Intel® Arc™ Pro B60 cards:

```bash
vllm serve openai/gpt-oss-120b --dtype=bfloat16 --enforce-eager --port 8000 --host 0.0.0.0 --trust-remote-code --gpu-memory-util=0.9 --no-enable-prefix-caching --max-num-batched-tokens=8192 --disable-log-requests --max-model-len=16384 --block-size 64 -tp 4
```

Start another shell and run the benchmarking:

```bash
vllm bench serve --model openai/gpt-oss-120b --dataset-name sonnet --dataset-path="./benchmarks/sonnet.txt" --sonnet-input-len=1024 --sonnet-output-len=1024 --ignore-eos --num-prompt 1 --trust_remote_code --request-rate inf --backend vllm --port=8000 --host 0.0.0.0
```

A fuller list of validated models can be found here: [Supported Models](https://github.com/intel/ai-containers/blob/main/vllm/0.10.2-xpu.md#supported-models)
