  <h1>The Rust CUDA Project</h1>

  <p>
    <strong>An ecosystem of libraries and tools for writing and executing extremely fast GPU code
    fully in <a href="https://www.rust-lang.org/">Rust</a></strong>
  </p>
</div>

> [!IMPORTANT]
> This project is no longer dormant and is [being
> rebooted](https://rust-gpu.github.io/blog/2025/01/27/rust-cuda-reboot). Read the [latest status update](https://rust-gpu.github.io/blog/2025/08/11/rust-cuda-update).
> Please contribute!
>
> The project is still in early development, however. Expect bugs, safety issues, and things that
> don't work.

## Goal

The Rust CUDA Project aims to make Rust a tier-1 language for extremely fast GPU computing
using the CUDA Toolkit. It provides tools for compiling Rust to extremely fast PTX code, as well as libraries
for using existing CUDA libraries with it.

## Background

Historically, general-purpose high-performance GPU computing has been done using the CUDA Toolkit. The CUDA Toolkit primarily
provides a way to use Fortran/C/C++ code for GPU computing in tandem with CPU code from a single source. It also provides
many libraries, tools, forums, and documentation to supplement the single-source CPU/GPU code.

CUDA is an NVIDIA-only toolkit. Many tools have been proposed for cross-platform GPU computing, such as
OpenCL, Vulkan Compute, and HIP, but CUDA remains by far the most used toolkit for such tasks. This is why it is
imperative to make Rust a viable option for use with the CUDA Toolkit.

However, CUDA with Rust has historically been a very rocky road. The only viable option until now has been the LLVM PTX
backend, which does not always work and can generate invalid PTX for many common Rust operations. In recent years it has
been shown time and time again that a specialized solution is needed for Rust on the GPU, with the advent
of projects such as rust-gpu (for Rust -> SPIR-V).

Our hope is that with this project we can push the Rust GPU computing industry forward and make Rust an excellent language
for such tasks. Rust offers plenty of benefits, such as `__restrict__` performance benefits for every kernel, an excellent module/crate system,
delimiting of unsafe areas of CPU/GPU code with `unsafe`, high-level wrappers for low-level CUDA libraries, etc.
## Structure

The scope of the Rust CUDA Project is quite broad: it spans the entirety of the CUDA ecosystem, with libraries and tools to make it
usable from Rust. Therefore, the project contains many crates covering all corners of the CUDA ecosystem.

The current line-up of libraries is the following:

- `rustc_codegen_nvvm`, a rustc backend that targets NVVM IR (a subset of LLVM IR) for the [libnvvm](https://docs.nvidia.com/cuda/nvvm-ir-spec/index.html) library.
  - Generates highly optimized PTX code which can be loaded by the CUDA Driver API and executed on the GPU.
  - For the near future it will be CUDA-only, but it may be used to target amdgpu in the future.
- `cuda_std` for GPU-side functions and utilities, such as thread index queries, memory allocation, warp intrinsics, etc. (see the kernel sketch after this list).
  - _Not_ a low-level library; it provides many utility functions to make it easier to write cleaner and more reliable GPU kernels.
  - Closely tied to `rustc_codegen_nvvm`, which exposes GPU features through it internally.
- [`cudnn`](https://github.com/Rust-GPU/rust-cuda/tree/master/crates/cudnn) for a collection of GPU-accelerated primitives for deep neural networks.
- `cust` for CPU-side CUDA features such as launching GPU kernels, GPU memory allocation, device queries, etc. (see the host-side sketch after this list).
  - High-level, with features such as RAII and Rust `Result`s that make it easier and cleaner to manage the interface to the GPU.
  - A high-level wrapper for the CUDA Driver API, the lower-level counterpart to the more common CUDA Runtime API used from C++.
  - Provides much more fine-grained control over things like kernel concurrency and module loading than the C++ Runtime API.
- `gpu_rand` for GPU-friendly random number generation; currently it only implements xoroshiro RNGs from `rand_xoshiro`.
- `optix` for CPU-side hardware raytracing and denoising using the CUDA OptiX library.

In addition, there are many "glue" crates providing high-level wrappers for certain smaller CUDA libraries.
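To give a rough feel for how `cuda_std` and `cust` fit together, here is a minimal sketch of an element-wise `add` kernel and the host code that launches it. This is illustrative only: the kernel crate's `#![cfg_attr(...)]` boilerplate is omitted, the `add.ptx` path and launch configuration are made up, and the exact APIs may shift as the project is rebooted, so treat the guide's examples as the source of truth.

```rust
// GPU-side crate, compiled to PTX by `rustc_codegen_nvvm`: one thread per element.
use cuda_std::prelude::*;

#[kernel]
pub unsafe fn add(a: &[f32], b: &[f32], c: *mut f32) {
    let idx = thread::index_1d() as usize;
    if idx < a.len() {
        // Each thread writes a single distinct element, so this raw write does not race.
        let elem = &mut *c.add(idx);
        *elem = a[idx] + b[idx];
    }
}
```

On the CPU side, `cust` loads the generated PTX, copies the buffers over, and launches the kernel through the Driver API:

```rust
// CPU-side crate: the PTX path is a placeholder for wherever your build script puts it.
use cust::prelude::*;

static PTX: &str = include_str!("../resources/add.ptx");

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a context on the first available CUDA device.
    let _ctx = cust::quick_init()?;

    // Load the PTX into a module and look the kernel up by name.
    let module = Module::from_ptx(PTX, &[])?;
    let add = module.get_function("add")?;

    // Copy the inputs to the GPU and allocate the output buffer.
    let a = DeviceBuffer::from_slice(&[1.0f32, 2.0, 3.0, 4.0])?;
    let b = DeviceBuffer::from_slice(&[5.0f32, 6.0, 7.0, 8.0])?;
    let out = DeviceBuffer::from_slice(&[0.0f32; 4])?;

    // Launch one block of four threads, then wait for the stream to finish.
    let stream = Stream::new(StreamFlags::NON_BLOCKING, None)?;
    unsafe {
        launch!(add<<<1, 4, 0, stream>>>(
            a.as_device_ptr(),
            a.len(),
            b.as_device_ptr(),
            b.len(),
            out.as_device_ptr()
        ))?;
    }
    stream.synchronize()?;

    // Copy the result back to the host.
    let mut result = vec![0.0f32; 4];
    out.copy_to(&mut result[..])?;
    println!("{result:?}"); // [6.0, 8.0, 10.0, 12.0]
    Ok(())
}
```

The grid and block sizes are hard-coded for four elements here; real code would derive them from the input length.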
## Related Projects

Other projects related to using Rust on the GPU:

- 2016: [glassful](https://github.com/kmcallister/glassful) Subset of Rust that compiles to GLSL.
- 2017: [inspirv-rust](https://github.com/msiglreith/inspirv-rust) Experimental Rust MIR -> SPIR-V compiler.
- 2018: [nvptx](https://github.com/japaric-archived/nvptx) Rust to PTX compiler using the `nvptx` target for rustc (using the LLVM PTX backend).
- 2020: [accel](https://github.com/termoshtt/accel) Higher-level library that relied on the same mechanism as `nvptx`.
- 2020: [rlsl](https://github.com/MaikKlein/rlsl) Experimental Rust -> SPIR-V compiler (predecessor to rust-gpu).
- 2020: [rust-gpu](https://github.com/Rust-GPU/rust-gpu) `rustc` compiler backend to compile Rust to SPIR-V for use in shaders, using a similar mechanism to this project.
## Usage

```bash
# Set up your environment, for example:
# export OPTIX_ROOT=/opt/NVIDIA-OptiX-SDK-9.0.0-linux64-x86_64
# export OPTIX_ROOT_DIR=/opt/NVIDIA-OptiX-SDK-9.0.0-linux64-x86_64

# Build the project.
cargo build
```
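When using Rust CUDA from your own crates, the GPU-side crate is typically compiled to PTX from a build script using `cuda_builder`. A minimal sketch, assuming a kernel crate at `../add_gpu` and an output path of `../resources/add.ptx` (both illustrative; builder options may differ between versions):

```rust
// build.rs of the CPU-side crate: compile the GPU kernel crate to PTX so it can be
// embedded at compile time with `include_str!`.
use cuda_builder::CudaBuilder;

fn main() {
    CudaBuilder::new("../add_gpu")
        .copy_to("../resources/add.ptx")
        .build()
        .unwrap();
}
```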
## Use Rust CUDA in Container Environments

The distribution-specific Dockerfiles are located in the `container` folder.
Taking Ubuntu 24.04 as an example, run the following commands in the repository root:

```bash
docker build -f ./container/ubuntu24-cuda12/Dockerfile -t rust-cuda-ubuntu24 .
docker run --rm --runtime=nvidia --gpus all -it rust-cuda-ubuntu24
```
A sample `.devcontainer.json` file is also included, configured for Ubuntu 24.04. Copy it to
`.devcontainer/devcontainer.json` to make additional customizations.

## Documentation

Please see [The Rust CUDA Guide](https://rust-gpu.github.io/rust-cuda/index.html) for all the
documentation on using Rust CUDA.
## License

Licensed under either of

- Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
  http://www.apache.org/licenses/LICENSE-2.0)
- MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)

at your discretion.

### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in
the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any
additional terms or conditions.