Conversation

@yzh119 yzh119 commented Nov 16, 2025

📌 Description

Duplicate of #2091; recreated as a PR from the flashinfer-ai organization so that the CI workflows can run.

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Bug Fixes

    • Corrected CUDA compute capability targeting from 11.0f to 11.0a for improved compatibility across build configurations.
  • Documentation

    • Updated installation and build documentation to reflect updated CUDA architecture configurations for both older and newer CUDA versions.

coderabbitai bot commented Nov 16, 2025

Walkthrough

CUDA compute architecture lists are updated across workflows, documentation, and build scripts, consistently replacing 11.0f with 11.0a to adjust supported GPU architecture targets during CUDA compilation and CI/CD builds.

Changes

Cohort / File(s) | Summary

  • CI/CD Workflows (.github/workflows/nightly-release.yml, .github/workflows/release.yml): FLASHINFER_CUDA_ARCH_LIST updated, replacing 11.0f with 11.0a in the architecture list used for CUDA >= 13.0 in the release and nightly pipelines
  • Build Scripts (scripts/task_test_jit_cache_package_build_import.sh): CUDA compute capability list updated, with 11.0f replaced by 11.0a in the architecture selection for CUDA >= 13.0
  • Documentation & Examples (README.md, docs/installation.rst): shell commands and installation instructions updated, changing the FLASHINFER_CUDA_ARCH_LIST environment variable from 11.0f to 11.0a

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~5 minutes

  • Highly repetitive, consistent change pattern applied across multiple files
  • Simple string substitution with no logic modifications or control flow alterations
  • Straightforward verification: confirm all instances of 11.0f in architecture lists are correctly replaced with 11.0a
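The verification described above amounts to a single string substitution. As an illustrative sketch (the arch list is taken from this PR's workflow files; the `sed` call is a stand-in for the actual hand edit, not the PR's tooling):

```shell
# Sketch of the substitution this PR applies: swap the Thor entry
# "11.0f" for "11.0a" in a FLASHINFER_CUDA_ARCH_LIST-style string.
old='7.5 8.0 8.9 9.0a 10.0a 10.3a 11.0f 12.0f'
new=$(printf '%s' "$old" | sed 's/11\.0f/11.0a/')
echo "$new"   # 7.5 8.0 8.9 9.0a 10.0a 10.3a 11.0a 12.0f
```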

Possibly related PRs

Suggested reviewers

  • aleozlx
  • yongwww
  • cyx-6
  • wenscarl
  • bkryu
  • nvmbreughe

Poem

🐰 A tiny tweak, from f to a,
Compute arch lists dance all day,
Eleven's evolution shines so bright,
Workflows, scripts, and docs all right!
🎯✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)

  • Title check — Passed: the title accurately describes the main change across all modified files, updating the CUDA architecture from 11.0f to 11.0a throughout workflow configs, documentation, and build scripts.
  • Description check — Passed: the description follows the template structure with all sections completed (description, related issues, pre-commit checks, tests), though it lacks specific technical detail about why this CUDA arch update is necessary.
  • Docstring Coverage — Passed: no functions found in the changed files to evaluate; docstring coverage check skipped.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cce4952 and 7bef117.

📒 Files selected for processing (5)
  • .github/workflows/nightly-release.yml (1 hunks)
  • .github/workflows/release.yml (1 hunks)
  • README.md (1 hunks)
  • docs/installation.rst (1 hunks)
  • scripts/task_test_jit_cache_package_build_import.sh (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (6):
  • GitHub Check: build-flashinfer-cubin
  • GitHub Check: build-flashinfer-jit-cache (13.0, aarch64)
  • GitHub Check: build-flashinfer-jit-cache (12.9, x86_64)
  • GitHub Check: build-flashinfer-cubin
  • GitHub Check: build-flashinfer-jit-cache (12.8, aarch64)
  • GitHub Check: Deploy Docs
🔇 Additional comments (4)
.github/workflows/release.yml (1)

185-185: CUDA architecture list updated consistently across release workflows.

Line 185 mirrors the nightly-release.yml update, replacing 11.0f with 11.0a in the CUDA architecture list for CUDA ≥ 13.0. The change maintains the environment variable's expected format and semantics.

README.md (1)

93-93: Documentation example updated to reflect new CUDA architecture.

Line 93 updates the build-from-source example to use 11.0a instead of 11.0f. The export statement remains syntactically valid and provides correct guidance to users building the flashinfer-jit-cache package.

docs/installation.rst (1)

95-95: Documentation consistently updated across RST format.

Line 95 updates the RST code block to use 11.0a instead of 11.0f, maintaining consistency with README.md and mirroring the workflow changes. The build instructions for flashinfer-jit-cache remain clear and actionable.

.github/workflows/nightly-release.yml (1)

148-148: Remove invalid CUDA architecture identifier "11.0a" from line 148.

The architecture string on line 148 includes 11.0a, which is not a valid NVIDIA CUDA compute capability identifier. The H100 (Hopper GH100) uses compute capability 9.0 (sm_90), and the "a" suffix denotes architecture-accelerated variants (e.g., sm_90a, sm_100a), not a compute capability value of 11.0a. This invalid identifier will cause build failures when CUDA ≥ 13.0 is targeted.

          FLASHINFER_CUDA_ARCH_LIST: ${{ matrix.cuda < '13.0' && '7.5 8.0 8.9 9.0a 10.0a 12.0a' || '7.5 8.0 8.9 9.0a 10.0a 10.3a 11.0a 12.0f' }}

Either remove 11.0a or replace it with a valid architecture identifier (e.g., 10.3a or another supported target).

Likely an incorrect or invalid review comment.
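For context on the suffix semantics disputed above: entries such as `9.0a` or `11.0a` are compute capabilities with an architecture-specific variant suffix, which nvcc consumes as targets like `sm_90a` or `sm_110a`. A hypothetical mapping (illustrative only, not FlashInfer's actual build logic) might look like:

```shell
# Hypothetical helper mapping an arch-list entry (e.g. "11.0a") to an
# nvcc -gencode flag; the function name and format are illustrative,
# not FlashInfer's code.
arch_to_gencode() {
  entry=$1
  suffix=${entry##*[0-9]}      # trailing "a"/"f" variant marker, if any
  num=${entry%"$suffix"}       # numeric compute capability, e.g. 11.0
  sm=$(printf '%s' "$num" | tr -d '.')
  printf -- '-gencode=arch=compute_%s%s,code=sm_%s%s\n' \
    "$sm" "$suffix" "$sm" "$suffix"
}
flag=$(arch_to_gencode "11.0a")
echo "$flag"   # -gencode=arch=compute_110a,code=sm_110a
```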



@gemini-code-assist

Summary of Changes

Hello @yzh119, I'm Gemini Code Assist¹. I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request makes a minor but important correction to the CUDA architecture specified for 'Thor', updating the identifier from "11.0f" to "11.0a" in the relevant configuration and documentation files. This ensures that users and build processes target the intended CUDA architecture, preventing potential compilation or runtime issues caused by an incorrect target specification.

Highlights

  • CUDA Architecture Correction: The CUDA architecture identifier for 'thor' has been updated from "11.0f" to "11.0a" across documentation and build scripts to ensure proper compilation and compatibility.
Ignored Files
  • Ignored by pattern: .github/workflows/** (2)
    • .github/workflows/nightly-release.yml
    • .github/workflows/release.yml

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request updates the CUDA architecture for 'Thor' from 11.0f to 11.0a across documentation and build scripts. The changes are consistent and address the goal of the PR. I've provided a few suggestions to improve the code and documentation:

  • Refactoring the architecture selection logic in the build script to reduce code duplication and improve maintainability.
  • Pointing out an inconsistency in the CUDA architecture list between the documentation files (README.md, docs/installation.rst) and the build script, suggesting to synchronize them to avoid user confusion.

`flashinfer-jit-cache` (customize `FLASHINFER_CUDA_ARCH_LIST` for your target GPUs):

```diff
-export FLASHINFER_CUDA_ARCH_LIST="7.5 8.0 8.9 10.0a 10.3a 11.0f 12.0f"
+export FLASHINFER_CUDA_ARCH_LIST="7.5 8.0 8.9 10.0a 10.3a 11.0a 12.0f"
```
Severity: medium

There seems to be an inconsistency in the CUDA architecture list between the documentation and the build script. This example list is missing the 9.0a architecture, which is included in the base list in scripts/task_test_jit_cache_package_build_import.sh. To avoid confusion, it would be best to keep them synchronized.

Suggested change:

```diff
-export FLASHINFER_CUDA_ARCH_LIST="7.5 8.0 8.9 10.0a 10.3a 11.0a 12.0f"
+export FLASHINFER_CUDA_ARCH_LIST="7.5 8.0 8.9 9.0a 10.0a 10.3a 11.0a 12.0f"
```

```diff
 .. code-block:: bash
-    export FLASHINFER_CUDA_ARCH_LIST="7.5 8.0 8.9 10.0a 10.3a 11.0f 12.0f"
+    export FLASHINFER_CUDA_ARCH_LIST="7.5 8.0 8.9 10.0a 10.3a 11.0a 12.0f"
```
Severity: medium

There seems to be an inconsistency in the CUDA architecture list between the documentation and the build script. This example list is missing the 9.0a architecture, which is included in the base list in scripts/task_test_jit_cache_package_build_import.sh. To avoid confusion, it would be best to keep them synchronized.

Suggested change:

```diff
-export FLASHINFER_CUDA_ARCH_LIST="7.5 8.0 8.9 10.0a 10.3a 11.0a 12.0f"
+export FLASHINFER_CUDA_ARCH_LIST="7.5 8.0 8.9 9.0a 10.0a 10.3a 11.0a 12.0f"
```

@yongwww (Member) left a comment

lgtm

@yzh119 yzh119 enabled auto-merge (squash) November 16, 2025 22:33
@yzh119 yzh119 disabled auto-merge November 16, 2025 22:39
@yzh119 yzh119 merged commit d42b71f into main Nov 16, 2025
37 of 38 checks passed
@yzh119 yzh119 deleted the update-thor-arch branch November 16, 2025 22:39
qsang-nv pushed a commit to qsang-nv/flashinfer that referenced this pull request Nov 18, 2025
