upgrade vLLM to main #4608
Conversation
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request upgrades vLLM to main, which involves a lot of refactoring to align with upstream changes. Most changes are related to module path updates, API signature changes (e.g., rope_parameters), and data type changes (from numpy arrays to Python lists). I've found a critical issue in vllm_ascend/spec_decode/eagle_proposer.py where a hardcoded index is used instead of iterating through the batch, which will lead to incorrect behavior in speculative decoding. Please address this issue.
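To make the reported pattern concrete, here is a minimal, hypothetical sketch of the shape of the fix: iterate over every request in the batch rather than reading a single hardcoded index. The names (`gather_last_hidden_states`, `target_hidden_states`, `num_reqs`) are illustrative and are not taken from the actual `eagle_proposer.py` code.

```python
# Hypothetical illustration of the review finding, not the real proposer code.
def gather_last_hidden_states(target_hidden_states, num_reqs):
    """Collect the per-request hidden state for every request in the batch."""
    # Buggy pattern: a hardcoded index only ever serves the first request:
    #     return [target_hidden_states[0]]
    # Correct pattern: iterate through the whole batch.
    return [target_hidden_states[req_idx] for req_idx in range(num_reqs)]
```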
This pull request has conflicts; please resolve them before we can evaluate the pull request.
tests/ut/attention/test_mla_v1.py
Outdated
self.mock_vllm_config.scheduler_config = SchedulerConfig(
    max_num_seqs=8, chunked_prefill_enabled=True)
mock_scheduler_config = MagicMock(spec=SchedulerConfig)
mock_scheduler_config.max_num_seqs = 8  # Set to an integer, not a MagicMock
Please use English comments.
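As background on the snippet above, a short hypothetical sketch of why the attribute is pinned to a real integer: any unset attribute read from a `MagicMock` is itself a `MagicMock`, so code that does arithmetic or type checks on `max_num_seqs` would not behave the way it does with a real `SchedulerConfig`. The example below uses a bare mock and made-up prints; it is not the actual test.

```python
from unittest.mock import MagicMock

mock_cfg = MagicMock()
# An unset attribute is itself a MagicMock, not an int:
print(isinstance(mock_cfg.max_num_seqs, int))   # False

mock_cfg.max_num_seqs = 8                       # pin the attribute to a real value
print(isinstance(mock_cfg.max_num_seqs, int))   # True
print(mock_cfg.max_num_seqs + 1)                # 9, behaves like a real config field
```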
MengqingCao
left a comment
Please fix the above comment. LGTM if CI passes.
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
fix Update `rope_scaling` to `rope_parameters` in preparation for Transformers v5 (vllm#28542). The model structure code we are involved in is adapted to the rename; see the first sketch after this list.
fix Revert "[Redo] #26368 (#28771)" (vllm#29121). The output token type has changed from a numpy array to `list[list[int]]`; see the second sketch after this list.
fix [Core] Deprecate `xformers` (vllm#29262). The `xformers` backend for multimodal has been deprecated.
fix [Attention] Remove imports from `vllm/attention/__init__.py` (vllm#29342).
fix [Core] Refactor padding logic and pad for CUDA graphs before attention metadata building (vllm#28579).
fix [Feature] Prefill Context Parallel (PCP) basic support (vllm#28718).
fix [Config] Clean up SchedulerConfig initialization (vllm#28665).
fix [Frontend][torch.compile] CompilationConfig Overhaul (#20283): Set up -O infrastructure (vllm#26847). vLLM introduced the `-O` optimization level, some default config values have changed, and the `--enforce-eager` parameter has been deprecated.
fix http://github.com/vllm-project/vllm/pull/29223. The sampler now returns a tuple.
fix Remove upstream fa checks (vllm#29471). We'll remove the related patch to avoid this kind of error.
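For the `rope_scaling` to `rope_parameters` rename, a minimal sketch of a compatibility read from an HF-style config object. The helper name and the `hf_config` parameter are illustrative and do not come from the vLLM or vLLM Ascend API.

```python
def get_rope_parameters(hf_config):
    """Illustrative helper: read rotary-embedding settings from an HF-style
    config, preferring the new Transformers v5 attribute over the old one."""
    # New attribute name introduced by the upstream rename (vllm#28542).
    params = getattr(hf_config, "rope_parameters", None)
    if params is None:
        # Fall back to the pre-rename attribute for older configs.
        params = getattr(hf_config, "rope_scaling", None)
    return params
```

For the output-token type change, a minimal normalization sketch, assuming the sampled token IDs now arrive as `list[list[int]]` instead of a 2D numpy array. The function and parameter names are hypothetical, not the actual vLLM Ascend code path.

```python
import numpy as np

def as_token_id_lists(sampled_token_ids):
    """Illustrative normalization: accept either the old numpy-array form or
    the new list[list[int]] form and always return list[list[int]]."""
    if isinstance(sampled_token_ids, np.ndarray):
        # Old behavior: a 2D array of token ids, one row per request.
        return sampled_token_ids.tolist()
    # New behavior: already a list of per-request token-id lists.
    return sampled_token_ids
```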
Co-authored-by: hfadzxy <starmoon_zhang@163.com>
Co-authored-by: wangli <wangli858794774@gmail.com>