
@yzh119 (Collaborator) commented Nov 14, 2025

📌 Description

Follow-up to #2028. Since Thor is the only member of the sm_110 series, we should use 110a directly instead of 110f (family).
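
For context, a minimal sketch of what the documented build step looks like with the new value, assuming the FLASHINFER_CUDA_ARCH_LIST variable referenced in the review notes below; the exact arch list values and build command in README.md may differ:

```bash
# Hedged sketch, not the repository's exact command: arch values other than
# 11.0a and the build invocation itself are illustrative assumptions.
export FLASHINFER_CUDA_ARCH_LIST="7.5 8.0 8.9 9.0a 10.0a 10.3a 11.0a 12.0f"
python -m build --wheel flashinfer-jit-cache  # hypothetical package path
```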

🔍 Related Issues

#2028

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

Release Notes

  • Chores
    • Updated CUDA compute architecture configuration from 11.0f to 11.0a across build workflows, installation documentation, and build scripts for consistent JIT cache build support.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @yzh119, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the specification of CUDA architecture sm_110 from a general family identifier to a precise architecture identifier. This ensures that the system correctly targets the Thor GPU, as it is the sole member of its series, improving accuracy in architecture configuration across documentation and build scripts.

Highlights

  • CUDA Architecture Update: The CUDA architecture specification for the sm_110 series has been updated from 11.0f (family) to 11.0a (specific architecture). This change reflects that Thor is currently the only member in the sm_110 series, making the specific identifier more appropriate.
  • Documentation and Script Alignment: Examples in README.md and docs/installation.rst, as well as the task_test_jit_cache_package_build_import.sh script, have been updated to reflect the new 11.0a architecture identifier.
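
As background for the rename, a hedged illustration of how family-conditional and architecture-specific targets differ at compile time; the exact nvcc spellings for sm_110 follow CUDA's usual -gencode pattern and are assumptions here, not taken from this repository:

```bash
# Family-conditional (the old 11.0f): the binary is valid on any
# sm_110-family GPU, including future family members.
nvcc -c kernel.cu -gencode arch=compute_110f,code=sm_110f
# Architecture-specific (the new 11.0a): the binary targets sm_110a
# (Thor) exactly, which is appropriate when it is the only family member.
nvcc -c kernel.cu -gencode arch=compute_110a,code=sm_110a
```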
Ignored Files
  • Ignored by pattern: .github/workflows/** (2)
    • .github/workflows/nightly-release.yml
    • .github/workflows/release.yml

@coderabbitai bot commented Nov 14, 2025

Walkthrough

CUDA architecture specifications are updated from 11.0f to 11.0a across build workflows, documentation, and test scripts. The change is consistently applied in five files without modifications to logic, control flow, or conditional branches.

Changes

  • Build Workflows (.github/workflows/nightly-release.yml, .github/workflows/release.yml): CUDA architecture list in conditional branches updated from 11.0f to 11.0a
  • Documentation (README.md, docs/installation.rst): CUDA compute capabilities list in installation instructions updated from 11.0f to 11.0a
  • Build Scripts (scripts/task_test_jit_cache_package_build_import.sh): CUDA architecture detection logic updated to append 11.0a instead of 11.0f for CUDA ≥ 13.0 (sketched below)
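
For concreteness, a hedged sketch of the version-gated selection described in the last item above; variable names and the base arch list are assumptions, not the script's exact contents:

```bash
# Parse the toolkit version from nvcc and gate the newer arches on CUDA >= 13
cuda_version="$(nvcc --version | sed -n 's/.*release \([0-9][0-9]*\.[0-9]*\).*/\1/p')"
cuda_major="${cuda_version%%.*}"
arches="7.5 8.0 8.9 9.0a"
if [ "${cuda_major}" -ge 13 ]; then
  # 11.0a (architecture-specific) replaces the former 11.0f (family)
  arches="${arches} 10.0a 10.3a 11.0a 12.0f"
fi
export FLASHINFER_CUDA_ARCH_LIST="${arches}"
```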

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

  • Highly repetitive, single-pattern change (11.0f → 11.0a) applied consistently across all five files
  • No logic, control flow, or conditional changes
  • Configuration and documentation updates only

Possibly related PRs

Suggested reviewers

  • nvmbreughe
  • yongwww
  • bkryu
  • cyx-6
  • wenscarl
  • aleozlx

Poem

🐰 From 11.0f to 11.0a we hop,
A simple tweak across the top,
Through workflows, docs, and scripts so fine,
One consistent change, by design!
CUDA architectures realign. ✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Title check: Passed. The title clearly summarizes the main change, updating the CUDA architecture designation from 11.0f to 11.0a, which is the only modification across all affected files.
  • Description check: Passed. The description follows the template with all required sections completed: description of changes, related issue link, pre-commit checklist marked, and tests section addressed.
  • Docstring coverage: Passed. No functions found in the changed files to evaluate docstring coverage; skipping the check.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9a79b78 and 7bef117.

📒 Files selected for processing (5)
  • .github/workflows/nightly-release.yml (1 hunks)
  • .github/workflows/release.yml (1 hunks)
  • README.md (1 hunks)
  • docs/installation.rst (1 hunks)
  • scripts/task_test_jit_cache_package_build_import.sh (1 hunks)
🔇 Additional comments (7)
.github/workflows/nightly-release.yml (1)

148-148: Consistent CUDA architecture update.

The CUDA architecture list correctly reflects the change from 11.0f to 11.0a for CUDA >= 13.0, aligning with the PR objective to use the specific architecture since sm_110 has no other variants.

scripts/task_test_jit_cache_package_build_import.sh (1)

46-50: Consistent architecture detection logic.

The CUDA version detection correctly appends 11.0a when CUDA >= 13.0, matching the workflow configuration. The conditional structure and list management are correct.

README.md (1)

93-93: User documentation correctly reflects architecture change.

The README example command for building flashinfer-jit-cache is updated consistently to use 11.0a instead of 11.0f, providing accurate guidance for end users.

docs/installation.rst (1)

95-95: Installation documentation consistent with README.

The build instruction in installation.rst is updated to use 11.0a, maintaining consistency with the README and actual build configurations across the project.

.github/workflows/release.yml (3)

185-185: Release workflow architecture configuration consistent.

The release workflow correctly mirrors the nightly workflow's CUDA architecture update, ensuring consistent builds across release channels.


1-421: Flag: Tests not yet passing according to PR checklist.

The PR checklist indicates "All tests are passing" is unchecked. Before merging, verify that:

  1. Build tests pass with the 11.0a architecture across CUDA 12.8, 12.9, 13.0
  2. Kernel compilation succeeds for both x86_64 and aarch64 architectures
  3. JIT cache generation completes without errors

This change affects build outputs, so comprehensive testing is important.
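
One hedged way to exercise this locally, using the script path from the PR's changed-files list (the invocation itself is an assumption):

```bash
# Build the JIT cache package with the updated arch list and verify it imports
bash scripts/task_test_jit_cache_package_build_import.sh
```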


1-421: Migration from 11.0f → 11.0a is complete across the entire codebase.

Verification confirms no remaining "11.0f" references exist. The migration is consistently applied in:

  • .github/workflows/release.yml (lines ~190): Updated FLASHINFER_CUDA_ARCH_LIST for CUDA >= 13.0
  • README.md and docs/installation.rst: Documentation reflects 11.0a
  • scripts/task_test_jit_cache_package_build_import.sh: Test configurations updated

The presence of "12.0f" for CUDA >= 13.0 targets is intentional and correct.
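
The sweep above can be reproduced with a hedged grep pass over the repository (the file globs are assumptions):

```bash
# Should print nothing: no stray family-specific 11.0f entries remain
grep -rn '11\.0f' --include='*.yml' --include='*.md' --include='*.rst' --include='*.sh' .
# Expected to match: 12.0f stays intentional for CUDA >= 13.0 targets
grep -rn '12\.0f' --include='*.yml' --include='*.md' --include='*.rst' --include='*.sh' .
```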



@gemini-code-assist bot left a comment

Code Review

This pull request updates the CUDA architecture for Thor from 110f to 110a across documentation and build scripts. The change is consistent and correctly reflects that Thor is the only member of the sm_110 series. I've added one suggestion to improve code readability in the build script. Overall, the changes look good.

Comment on lines 47 to 50 (diff shown):

  arches.append("10.0a")
  arches.append("10.3a")
- arches.append("11.0f")
+ arches.append("11.0a")
  arches.append("12.0f")
Severity: medium

For better readability and to make the code more concise, you can use list.extend() to add multiple items to the list at once instead of calling list.append() multiple times.

Suggested change:

- arches.append("10.0a")
- arches.append("10.3a")
- arches.append("11.0f")
- arches.append("11.0a")
- arches.append("12.0f")
+ arches.extend(["10.0a", "10.3a", "11.0a", "12.0f"])
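
As a side note on the design choice: list.extend() mutates arches in place exactly as the chained append() calls do, but it keeps the architecture list visible as a single literal, so adding or dropping an entry later is a one-token change.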

@yzh119 closed this Nov 16, 2025
yzh119 added a commit that referenced this pull request Nov 16, 2025
## 📌 Description

Duplicate of #2091, created PR from flashinfer-ai to enable workflow.

## 🔍 Related Issues

## 🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

### ✅ Pre-commit Checks

- [x] I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- [x] I have installed the hooks with `pre-commit install`.
- [x] I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

> If you are unsure about how to set up `pre-commit`, see [the pre-commit documentation](https://pre-commit.com/).

## 🧪 Tests

- [x] Tests have been added or updated as needed.
- [ ] All tests are passing (`unittest`, etc.).

## Reviewer Notes

## Summary by CodeRabbit

* **Bug Fixes**
  * Corrected CUDA compute capability targeting from 11.0f to 11.0a for improved compatibility across build configurations.

* **Documentation**
  * Updated installation and build documentation to reflect updated CUDA architecture configurations for both older and newer CUDA versions.
qsang-nv pushed a commit to qsang-nv/flashinfer that referenced this pull request Nov 18, 2025, with the same message (referencing the PR as flashinfer-ai#2091).