
Conversation

@sneha-rudra

  1. Updates the vLLM installation section in README.md to clearly distinguish between environments that already have CUDA libraries installed and those that do not.
  2. Provides separate installation commands for each scenario using vLLM 0.11.0 and clarifies when to use the direct install versus the extra index for PyTorch nightly wheels.
  3. Adds tips for troubleshooting CUDA or torch import errors on different setups.

This helps ensure users select the correct installation steps for their environment and resolve common setup issues.
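
For orientation, the two paths the description refers to presumably look something like the sketch below. The second command is a guess at the shape of the "extra index" path: the index URL and CUDA version are placeholders for illustration, not text quoted from the diff.

```bash
# Path 1: CUDA libraries already present in the container
# (this command appears verbatim in the diff excerpt further down)
uv pip install vllm==0.11.0 huggingface_hub[hf_transfer]==0.35.0 flashinfer-python==0.3.1

# Path 2: point the resolver at a PyTorch wheel index to get a CUDA-enabled torch
# (placeholder index URL and CUDA version; check the actual README for the real one)
uv pip install vllm==0.11.0 huggingface_hub[hf_transfer]==0.35.0 flashinfer-python==0.3.1 \
  --extra-index-url https://download.pytorch.org/whl/cu128
```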


@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines +82 to +88
**If your container/environment ALREADY HAS CUDA libraries pre-installed**:

```bash
uv pip install vllm==0.11.0 huggingface_hub[hf_transfer]==0.35.0 flashinfer-python==0.3.1
```

No extra steps required—vLLM will detect your CUDA setup and manage the correct torch version automatically.


P1: Installing vLLM without a CUDA wheel index leaves torch CPU-only

The new "CUDA already installed" path tells users to run `uv pip install vllm==0.11.0 ...` with no extra index, claiming vLLM will "detect your CUDA setup and manage the correct torch version automatically". PyPI only ships CPU-only torch wheels; without the download.pytorch.org index, pip installs the CPU build even when CUDA libraries are present. Launching `vllm serve` after this install fails with `Torch not compiled with CUDA enabled` or runs on CPU, defeating the purpose for most GPU containers. The docs should still instruct users to install a CUDA-enabled torch wheel (e.g. via the extra index or an explicit `torch==...+cuXXX` pin).
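
One quick way to tell which torch build an install actually produced is the check below; this is an illustrative snippet, not something taken from the review.

```bash
# Prints the CUDA version torch was built against and whether a GPU is visible;
# a CPU-only wheel prints "None False" even on a machine with working GPUs.
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
```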

Useful? React with 👍 / 👎.

