
Conversation

@gmagogsfm (Contributor) commented on Nov 14, 2025

`if inp` unintentionally checks for a zero value in addition to None-ness, which would introduce an unnecessary and failing dynamic shape guard in some cases. Checking None-ness only is sufficient and correct.

Signed-off-by: Yanan Cao <gmagogsfm@gmail.com>
@gemini-code-assist (bot, Contributor) left a comment

Code Review

This pull request addresses a subtle bug in the logic for reconstructing compilation inputs during the deserialization of a compiled function. The previous implementation, `inp or example_inputs[i]`, would incorrectly discard falsy values such as `0`, which can be valid inputs (e.g., for tensor dimensions). The change to `inp if inp is not None else example_inputs[i]` is correct, as it explicitly checks for `None`, which is the intended placeholder for runtime tensor inputs. This fix improves the robustness and correctness of the compilation caching mechanism.
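
A minimal standalone sketch of the failure mode (the list comprehension and the names `compile_inputs`/`example_inputs` here are illustrative, not vLLM's actual code):

```python
# Serialized compile-time inputs: None marks "fill in at runtime",
# but 0 is a legitimate value (e.g., a tensor dimension).
compile_inputs = [0, None, 4]
example_inputs = [8, 16, 32]  # fallbacks captured at compile time

# Buggy: `or` treats every falsy value (0, "", []) as missing.
buggy = [inp or example_inputs[i] for i, inp in enumerate(compile_inputs)]

# Fixed: only None, the intended placeholder, falls back.
fixed = [inp if inp is not None else example_inputs[i]
         for i, inp in enumerate(compile_inputs)]

print(buggy)  # [8, 16, 4] -- the valid 0 was silently replaced
print(fixed)  # [0, 16, 4] -- 0 is preserved; only None falls back
```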

@gmagogsfm (Contributor, Author) commented

@zou3519 This is one of the smaller bugs I found while investigating #28077. Please take a look

@gmagogsfm changed the title from "[Bugfix]Fix a check to not checks zero value" to "[Bugfix]Fix a conditional to not check zero value" on Nov 14, 2025
@gmagogsfm (Contributor, Author) commented

@zou3519 @ProExpertProg Could you take a quick look at this one-line change? Thanks

@ProExpertProg (Collaborator) left a comment

LGTM, cc @zhxchen17 @zou3519

@ProExpertProg (Collaborator) commented

@gmagogsfm could you elaborate on what the repro for this issue is?

@gmagogsfm (Contributor, Author) replied

> @gmagogsfm could you elaborate on what the repro for this issue is?

I discovered this issue while debugging a dynamic shape bug. I think `inp` was set to an unbacked SymInt, and this line of code tried to install a guard on that unbacked SymInt, triggering an assert failure.
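
A rough standalone illustration of that failure mode (a sketch, not the actual vLLM repro; it assumes a recent PyTorch where `torch._dynamo.config.capture_dynamic_output_shape_ops` is available, and the exact exception type may differ):

```python
import torch

# Allow nonzero() to produce an unbacked SymInt instead of graph-breaking.
torch._dynamo.config.capture_dynamic_output_shape_ops = True

@torch.compile(fullgraph=True)
def f(x):
    n = x.nonzero().shape[0]  # unbacked SymInt: value unknown at trace time
    if n:  # truthiness implies a guard like `n != 0` on an unbacked symbol
        return x + 1
    return x

try:
    f(torch.randn(4))
except Exception as e:
    # Expect a data-dependent guard error (e.g., GuardOnDataDependentSymNode).
    print(type(e).__name__)
```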

@zhxchen17 (Contributor) left a comment

looks good.

@ProExpertProg added the "ready" label (ONLY add when PR is ready to merge / full CI is needed) on Nov 21, 2025
@vllm-bot merged commit 933f67e into vllm-project:main on Nov 22, 2025 (44 of 45 checks passed)
@github-project-automation (bot) moved this from "To triage" to "Done" in torch.compile integration on Nov 22, 2025
ywang96 pushed a commit to ywang96/vllm that referenced this pull request Nov 23, 2025
lpapavassiliou pushed a commit to lpapavassiliou/vllm that referenced this pull request Nov 24, 2025
RunkaiTao pushed a commit to RunkaiTao/vllm that referenced this pull request Nov 24, 2025
bringlein pushed a commit to bringlein/vllm that referenced this pull request Nov 26, 2025
devpatelio pushed a commit to SumanthRH/vllm that referenced this pull request Nov 29, 2025
kitaekatt pushed a commit to kitaekatt/vllm that referenced this pull request Dec 1, 2025

Labels

ready (ONLY add when PR is ready to merge/full CI is needed), torch.compile

Projects

Status: Done


6 participants