2 changes: 1 addition & 1 deletion tests/serve/test_vllm.py
@@ -337,7 +337,7 @@ def vllm_config_test(request):

@pytest.mark.vllm
@pytest.mark.e2e
@pytest.mark.gpu_1
# @pytest.mark.gpu_1 # Disabled for now to unblock CI in main.
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```shell
# Find params_with_model_mark function definition
rg -n "def params_with_model_mark" -A 15
```

Repository: ai-dynamo/dynamo

Length of output: 793


🏁 Script executed:

```shell
# Also search for where params_with_model_mark is imported or used
rg -n "params_with_model_mark" -B 2 -A 2
```

Repository: ai-dynamo/dynamo

Length of output: 2051


🏁 Script executed:

```shell
# Check for any issue references or TODOs in the file
rg -n "(TODO|FIXME|issue|#[0-9])" tests/serve/test_vllm.py
```

Repository: ai-dynamo/dynamo

Length of output: 296


🏁 Script executed:

```shell
# Look at the file structure to understand mark application
head -350 tests/serve/test_vllm.py | tail -50
```

Repository: ai-dynamo/dynamo

Length of output: 1833


This workaround is ineffective—config-level gpu_1 marks will still apply.

The params_with_model_mark() function (tests/serve/common.py:70) preserves all config-level marks by copying them into the parameterized test parameters (line 77). Commenting out the function-level @pytest.mark.gpu_1 does not disable GPU requirements for test instances whose configs explicitly include pytest.mark.gpu_1 in their marks array—specifically "aggregated" (line 46), "aggregated_lmcache" (line 58), "agg-request-plane-tcp" (line 71), and "agg-request-plane-http" (line 83).
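The mark-preserving pattern the comment describes can be sketched as follows. This is a hypothetical reconstruction, not the actual `tests/serve/common.py` code: the config names are taken from the comment, but the dict shape and function body are assumptions made for illustration.

```python
import pytest

# Hypothetical config table: each entry carries its own marks, mirroring
# how the review says configs like "aggregated" embed pytest.mark.gpu_1.
CONFIGS = {
    "aggregated": {"marks": [pytest.mark.gpu_1]},
    "cpu_only": {"marks": []},
}


def params_with_model_mark(configs):
    # Config-level marks are copied onto each generated parameter set,
    # so they apply regardless of function-level decorators on the test.
    return [
        pytest.param(name, id=name, marks=tuple(cfg["marks"]))
        for name, cfg in configs.items()
    ]


params = params_with_model_mark(CONFIGS)
```

Because `pytest.param(..., marks=...)` attaches the marks to the parameter set itself, removing or commenting out a decorator on the test function leaves these per-config marks fully in effect, which is exactly why the workaround in this diff does not disable the GPU requirement.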

Additionally, this temporary workaround lacks a tracking issue. The PR description should reference an issue for:

  1. Re-enabling the marker once CI is fixed
  2. Addressing the root cause of the GPU requirement conflict
🤖 Prompt for AI Agents

```text
In tests/serve/test_vllm.py around line 340, commenting out the function-level
@pytest.mark.gpu_1 is ineffective because params_with_model_mark in
tests/serve/common.py preserves config-level marks (so configs like
"aggregated", "aggregated_lmcache", "agg-request-plane-tcp", and
"agg-request-plane-http" still force GPU). Fix by either removing or
conditionally filtering out pytest.mark.gpu_1 from the per-config marks in
params_with_model_mark (so a test-level disable actually takes effect), or add a
top-level check in test_vllm.py to skip parameterized cases whose combined marks
include gpu_1; also add a tracking issue reference in the PR description for (1)
re-enabling the marker when CI is fixed and (2) investigating/fixing the root
cause of the GPU requirement conflict.
```
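The "conditionally filter out `pytest.mark.gpu_1`" option could look roughly like the sketch below. `strip_gpu_marks` is a hypothetical helper (it does not exist in the repo), and the sample parameter sets are stand-ins for what `params_with_model_mark` would generate.

```python
import pytest


def strip_gpu_marks(param_sets):
    # Rebuild each parameter set without any gpu_1 mark, so that a
    # test-level disable of the GPU requirement actually takes effect.
    return [
        pytest.param(
            *ps.values,
            id=ps.id,
            marks=tuple(m for m in ps.marks if m.name != "gpu_1"),
        )
        for ps in param_sets
    ]


# Stand-in parameter sets, as params_with_model_mark might produce them.
params = [
    pytest.param("aggregated", id="aggregated", marks=pytest.mark.gpu_1),
    pytest.param("cpu_only", id="cpu_only"),
]
filtered = strip_gpu_marks(params)
```

The filtering could be gated on an environment variable or a pytest option so it only applies while CI is being unblocked, and removed once the tracking issue is resolved.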

@pytest.mark.nightly
def test_serve_deployment(
vllm_config_test, request, runtime_services, predownload_models, image_server