Your current environment
python collect_env.py
Collecting environment information...
System Info
==============================
OS : Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version : (Debian 10.2.1-6) 10.2.1 20210110
Clang version : Could not collect
CMake version : version 4.1.2
Libc version : glibc-2.31
==============================
PyTorch Info
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
Python version : 3.12.11 (main, Jul 1 2025, 05:28:02) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform : Linux-6.12.46+-x86_64-with-glibc2.31
==============================
CUDA / GPU Info
Is CUDA available : False
CUDA runtime version : No CUDA
CUDA_MODULE_LOADING set to : N/A
GPU models and configuration : No CUDA
Nvidia driver version : No CUDA
cuDNN version : No CUDA
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 224
On-line CPU(s) list: 0-223
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 207
Model name: INTEL(R) XEON(R) PLATINUM 8581C
Stepping: 2
CPU MHz: 999.998
BogoMIPS: 1999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 5.3 MiB
L1i cache: 3.5 MiB
L2 cache: 224 MiB
L3 cache: 520 MiB
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsa: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
==============================
Versions of relevant libraries
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.13.1.3
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-nccl-cu12==2.27.3
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvshmem-cu12==3.3.20
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] pyzmq==27.1.0
[pip3] torch==2.8.0
[pip3] torch_xla==2.8.0
[pip3] torchax==0.0.7
[pip3] torchvision==0.23.0
[pip3] transformers==4.57.1
[pip3] triton==3.4.0
[conda] Could not collect
==============================
vLLM Info
ROCM Version : Could not collect
vLLM Version : 0.11.1rc7.dev48+gdf4d3a44a (git sha: df4d3a44a)
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
Could not collect
==============================
Environment Variables
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
python -c "import jax; jax.print_environment_info()"
jax: 0.8.1.dev20251105
jaxlib: 0.8.1.dev20251105
numpy: 2.2.6
python: 3.12.11 (main, Jul 1 2025, 05:28:02) [GCC 10.2.1 20210110]
device info: TPU7x-8, 8 local devices
process_count: 1
platform: uname_result(system='Linux', node='qwen-coder-6df96689fd-lmznd', release='6.12.46+', version='#1 SMP Fri Oct 17 07:47:53 UTC 2025', machine='x86_64')
🐛 Describe the bug
vllm-tpu GitHub commit: 03ee48111de7372a1231872f26262e7c46ab1c83
tpu-inference GitHub commit: 36bd457
Trying to run GPT-OSS with vLLM using 2 TPU chips (TP=4) on a 4-TPU v7x-8 VM:
USE_MOE_EP_KERNEL=0 MODEL_IMPL_TYPE=vllm vllm serve \
  --model=unsloth/gpt-oss-120b-BF16 \
  --tensor-parallel-size=4 \
  --max-model-len=10240 \
  --max-num-batched-tokens=8192 \
  --max-num-seqs=128 \
  --async-scheduling \
  --no-enable-prefix-caching \
  --disable-log-requests \
  --gpu-memory-utilization=0.9
When running the benchmark client in the same VM:
git clone https://github.com/kimbochen/bench_serving.git
for config in "1024 1024" "2048 1024" "8192 1024" "1024 8192"; do
  set -- $config  # split "input_len output_len" into positional args $1 and $2
  echo "Running benchmark with input_len=$1 and output_len=$2"
  python3 bench_serving/benchmark_serving.py \
    --model=unsloth/gpt-oss-120b-BF16 \
    --backend=vllm \
    --dataset-name=random \
    --random-input-len=$1 \
    --random-output-len=$2 \
    --random-range-ratio=0.8 \
    --num-prompts=320 \
    --max-concurrency=64 \
    --request-rate=inf --ignore-eos \
    --percentile-metrics='ttft,tpot,itl,e2el'
done
The vLLM server failed with the following error output:
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] EngineCore encountered a fatal error.
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] Traceback (most recent call last):
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] File "/workspace/vllm/vllm/v1/engine/core.py", line 848, in run_engine_core
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] engine_core.run_busy_loop()
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] File "/workspace/vllm/vllm/v1/engine/core.py", line 875, in run_busy_loop
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] self._process_engine_step()
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] File "/workspace/vllm/vllm/v1/engine/core.py", line 904, in _process_engine_step
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] outputs, model_executed = self.step_fn()
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] File "/workspace/vllm/vllm/v1/engine/core.py", line 447, in step_with_batch_queue
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] engine_core_outputs = self.scheduler.update_from_output(
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] File "/workspace/vllm/vllm/v1/core/sched/scheduler.py", line 1014, in update_from_output
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] sampled_token_ids[req_index].tolist() if sampled_token_ids else []
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=205825) ERROR 11-17 00:35:03 [core.py:857] AttributeError: 'list' object has no attribute 'tolist'
(EngineCore_DP0 pid=205825) Process EngineCore_DP0:
(APIServer pid=205672) ERROR 11-17 00:35:03 [async_llm.py:525] AsyncLLM output_handler failed.
(APIServer pid=205672) ERROR 11-17 00:35:03 [async_llm.py:525] Traceback (most recent call last):
(APIServer pid=205672) ERROR 11-17 00:35:03 [async_llm.py:525] File "/workspace/vllm/vllm/v1/engine/async_llm.py", line 477, in output_handler
(APIServer pid=205672) ERROR 11-17 00:35:03 [async_llm.py:525] outputs = await engine_core.get_output_async()
(APIServer pid=205672) ERROR 11-17 00:35:03 [async_llm.py:525] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=205672) ERROR 11-17 00:35:03 [async_llm.py:525] File "/workspace/vllm/vllm/v1/engine/core_client.py", line 883, in get_output_async
(APIServer pid=205672) ERROR 11-17 00:35:03 [async_llm.py:525] raise self._format_exception(outputs) from None
(APIServer pid=205672) ERROR 11-17 00:35:03 [async_llm.py:525] vllm.v1.engine.exceptions.EngineDeadError: EngineCore encountered an issue. See stack trace (above) for the root cause.
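For context, the failing line in scheduler.py indexes sampled_token_ids and calls .tolist() on the entry, so it assumes each per-request row is tensor-like; on this TPU path the rows appear to already be plain Python lists. Below is a minimal sketch of the suspected type mismatch and a defensive conversion (the helper to_token_list is a hypothetical illustration, not the actual vLLM code or a proposed fix):

import torch

# Hypothetical illustration of the mismatch behind the AttributeError
# at scheduler.py:1014 ("'list' object has no attribute 'tolist'").
def to_token_list(row) -> list[int]:
    # GPU backends hand the scheduler tensor rows supporting .tolist();
    # the TPU backend here seems to hand it plain Python lists instead.
    return row.tolist() if isinstance(row, torch.Tensor) else list(row)

gpu_style_row = torch.tensor([101, 102, 103])  # what update_from_output expects
tpu_style_row = [101, 102, 103]                # what this backend appears to return

assert to_token_list(gpu_style_row) == to_token_list(tpu_style_row) == [101, 102, 103]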