Modifying decoders and attention for vllm #2616
Open
NicoGrande wants to merge 1 commit into `main` from `nicogrande/update-decoders-attention-vllm`
Conversation
Force-pushed from 655760f to d0c2503
Force-pushed from d0c2503 to 91c199b
cgarciae reviewed on Nov 11, 2025
gagika approved these changes on Nov 12, 2025
gagika (Collaborator) left a comment:
Thanks Nico, feel free to address the comments in a follow-up PR.
Force-pushed from 91c199b to 8f3305d
Force-pushed from 8f3305d to fc3db23
gagika approved these changes on Nov 13, 2025
Force-pushed from dbd531b to a9a75c5
shralex approved these changes on Nov 15, 2025
Force-pushed from 2adf1bb to 575ca28
- Removing calls into specialized attention modules.
- Adding vllm_rpa unit test.
- Fixing additional unit tests.
- Adding validation support for vllm_rpa.
- Rebasing deepseek and gpt-oss.
- Adding skip for vllm-tpu test.
- Addressing comments on lazy init.
- Adding check for kv_cache and attention_metadata.
- Adding comment on vllm_rpa.
- Adding pyconfig deprecated validation.
- Fixing pytype errors.
- Adding new output type to Qwen3-Omni vision encoder.
- Fixing deepseek batchsplit.
Force-pushed from 575ca28 to f6ead2e
copybara-service bot pushed a commit that referenced this pull request on Nov 18, 2025: …te-decoders-attention-vllm (f6ead2e, PiperOrigin-RevId: 833932020)
Description
This PR introduces new optional arguments `kv_cache` and `attention_metadata` for model decoder blocks and for the `attentions.py` module. These arguments are provided by vLLM when executing a MaxText model from the vLLM Engine and are used when calling the ragged paged attention kernel in `tpu-inference`. This PR builds on the work started in #2612.
Note: This PR changes the expected method signature of Attention layers and decoder layers.
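As a rough, hedged illustration of what such a signature change can look like (this is not the actual MaxText code; only the `kv_cache` and `attention_metadata` argument names come from this PR, and all other names are hypothetical):

```python
# Minimal sketch: an attention layer that accepts optional kv_cache /
# attention_metadata keyword arguments and only dispatches to the vLLM
# ragged-paged-attention path when both are provided. Class and helper
# names are illustrative, not the real MaxText implementation.
from typing import Any, Optional

import jax
import jax.numpy as jnp
import flax.linen as nn


class SketchAttention(nn.Module):
  head_dim: int

  @nn.compact
  def __call__(
      self,
      inputs_q: jnp.ndarray,                     # [batch, seq, heads, head_dim]
      inputs_kv: jnp.ndarray,
      *,
      kv_cache: Optional[Any] = None,            # handed in by vLLM at serving time
      attention_metadata: Optional[Any] = None,  # paging metadata from vLLM
  ) -> jnp.ndarray:
    if kv_cache is not None and attention_metadata is not None:
      # Serving path: this is where the ragged paged attention kernel from
      # tpu-inference would be invoked (omitted in this sketch).
      raise NotImplementedError("vLLM ragged paged attention path")
    # Default path: ordinary scaled dot-product attention.
    scores = jnp.einsum("bqhd,bkhd->bhqk", inputs_q, inputs_kv)
    weights = jax.nn.softmax(scores / jnp.sqrt(self.head_dim), axis=-1)
    return jnp.einsum("bhqk,bkhd->bqhd", weights, inputs_kv)


# Example usage of the default path (no vLLM arguments supplied):
layer = SketchAttention(head_dim=4)
x = jnp.ones((1, 8, 2, 4))
params = layer.init(jax.random.PRNGKey(0), x, x)
out = layer.apply(params, x, x)  # kv_cache / attention_metadata default to None
```

Because both new arguments default to `None`, existing call sites that do not pass them keep working unchanged.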
Tests
Includes a new unit test in `attention_test.py`. Additionally, end-to-end tests were performed locally on a v6e VM using the following steps:
1. In one process, start the vLLM server. This requires a local `config.json` file for the corresponding model you are trying to test from HuggingFace; modify this file so that `architectures: "MaxTextForCausalLM"` is set.
2. In a second process, issue the query to the model, as sketched below.
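The exact server and query commands are not reproduced here. As a minimal sketch of step 2, assuming the server from step 1 exposes vLLM's OpenAI-compatible API on the default `localhost:8000` (the model name below is a placeholder):

```python
# Illustrative only: query vLLM's OpenAI-compatible HTTP API.
# Replace "<model-name>" with the model id actually being served.
import requests

response = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "<model-name>",          # placeholder for the served model id
        "prompt": "The capital of France is",
        "max_tokens": 32,
        "temperature": 0.0,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```

The same request can equally be issued with `curl`; the Python form is shown only to keep the examples in one language.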
Results for the different tested models are shown below:
- llama3.1-8b
- gemma3-4b
- qwen3-8b
Checklist
Before submitting this PR, please make sure (put X in square brackets):
`gemini-review` label.