[Bugfix] Fix mamba2 prefill chunking #23279
Merged: tdoublep merged 12 commits into vllm-project:main from tomeras91:fix-mamba2-prefill-chunking on Sep 8, 2025.
Commits (12)
- bb1d32b (tomeras91): Add failing mamba2 prefill chunking unittest
- 5ce3ce7 (tomeras91): Fix chunked prefill + varlen batching bugs in mamba2 triton kernels
- 7cee118 (tomeras91): refactor test for readability
- 4b18938 (tomeras91): Add another failing test case
- 1fff3d7 (tomeras91): fix the failing test case: more careful sequence index handling (+ref…)
- a2101a7 (tomeras91): Add docstring to somewhat cryptic function
- 7d7bf56 (tomeras91): mypy typehint
- 6ff01bb (tomeras91): fix masking when loading chunk offset
- 2bfe36b (tomeras91): fix example in docstring
- 5ad41fc (tomeras91): rename parameter and add documentation
- 39bca2d (tomeras91): Add docstring to test
- 425b075 (tomeras91): Merge branch 'main' into fix-mamba2-prefill-chunking
Conversations
This test looks like it's replicating test_mamba_chunk_scan_cont_batch; what is the key difference?
The key difference is that this test makes sure prefill chunking works as expected without using the PyTorch reference implementation. Instead, it compares the kernel output without prefill chunking to the concatenated outputs with prefill chunking. This is the most straightforward way to verify that prefill chunking behaves correctly.
Another crucial difference from test_mamba_chunk_scan_cont_batch is that this test covers cases where the sequence length is not a multiple of the mamba chunk size, in other words, cases where a sequence changes in the middle of a mamba chunk. These are the cases that currently fail on main and require the fixes in this PR. These cases are also not supported in the PyTorch implementation (see the other discussion), so they can't easily be added to test_mamba_chunk_scan_cont_batch, which compares kernel results with the reference PyTorch implementation.
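The comparison strategy described in this thread (one full-prefill pass vs. concatenated chunked-prefill passes, with a chunk boundary landing mid-sequence) can be sketched with a toy stateful scan. The `scan` function below is a hypothetical stand-in for the real mamba2 Triton kernel, not the actual vLLM code:

```python
import numpy as np

def scan(x, state=0.0):
    """Toy stand-in for a mamba-style scan: a cumulative sum that
    carries a running state between calls. Hypothetical example only."""
    out = np.cumsum(x) + state
    return out, out[-1]  # outputs, final state to carry forward

rng = np.random.default_rng(0)
seq = rng.standard_normal(7)  # length deliberately NOT a multiple of the chunk size

# Reference: one pass over the whole prefill.
full_out, _ = scan(seq)

# Chunked prefill: process in chunks of 3, carrying state across chunks.
chunk_size, outs, state = 3, [], 0.0
for i in range(0, len(seq), chunk_size):
    out, state = scan(seq[i:i + chunk_size], state)
    outs.append(out)

# A correct chunked implementation must reproduce the unchunked output.
assert np.allclose(full_out, np.concatenate(outs))
```

The point of picking a length that is not a multiple of the chunk size is the same as in the test: it forces the chunked path to handle a boundary that falls mid-sequence, which is exactly the case this PR fixes.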
I see, thanks for the explanation. In that case I suggest adding some documentation explaining how test_mamba_chunk_scan_cont_batch_prefill_chunking differs from the previous test, since this test is a little long and hard to understand at a quick glance.
Makes sense. Done.