Commit 04e57bb

[documentation] Add a paragraph about other GPU backends (#183)
* [documentation] Add a paragraph about other GPU backends
* Update the documentation again
1 parent 80fcf96 commit 04e57bb

File tree

1 file changed: +11 −1 lines changed


docs/src/gpu.jl

Lines changed: 11 additions & 1 deletion
@@ -1,4 +1,4 @@
-# # Accelerations
+# # CPU and GPU acceleration
 # One of the key features of ExaModels.jl is being able to evaluate derivatives either on multi-threaded CPUs or GPU accelerators. Currently, GPU acceleration is only tested for NVIDIA GPUs. If you'd like to use multi-threaded CPU acceleration, start Julia with
 # ```
 # $ julia -t 4 # using 4 threads
@@ -78,3 +78,13 @@ end
 # m = cuda_luksan_vlcek_model(10)
 # madnlp(m)
 # ```
+# Since ExaModels builds on [KernelAbstractions.jl](https://github.com/JuliaGPU/KernelAbstractions.jl),
+# it can in principle target multiple hardware backends.
+# The following backends are provided by the JuliaGPU ecosystem:
+#
+# - `CPU()` for multi-threaded CPU execution
+# - `CUDABackend()` for NVIDIA GPUs ([CUDA.jl](https://github.com/JuliaGPU/CUDA.jl))
+# - `ROCBackend()` for AMD GPUs ([AMDGPU.jl](https://github.com/JuliaGPU/AMDGPU.jl))
+# - `OneAPIBackend()` for Intel GPUs ([oneAPI.jl](https://github.com/JuliaGPU/oneAPI.jl))
+# - `MetalBackend()` for Apple GPUs ([Metal.jl](https://github.com/JuliaGPU/Metal.jl))
+# - `OpenCLBackend()` for generic OpenCL devices ([OpenCL.jl](https://github.com/JuliaGPU/OpenCL.jl))
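A minimal sketch of how one of these backends is selected, assuming the `backend` keyword of `ExaCore` as used in the ExaModels.jl documentation (the model below mirrors the Luksan-Vlcek example referenced earlier in the diff; requires an NVIDIA GPU):

```julia
using ExaModels
using CUDA  # provides CUDABackend(); load AMDGPU, oneAPI, Metal, or OpenCL instead for other devices

# The backend is passed when the ExaCore is created; derivative evaluation
# kernels are then compiled for and executed on that device.
c = ExaCore(; backend = CUDABackend())
x = variable(c, 10; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:10))
objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:10)
constraint(
    c,
    3x[i+1]^3 + 2x[i+2] - 5 + sin(x[i+1] - x[i+2]) * sin(x[i+1] + x[i+2]) +
    4x[i+1] - x[i] * exp(x[i] - x[i+1]) - 3 for i = 1:8
)
m = ExaModel(c)  # swap CUDABackend() for ROCBackend(), OneAPIBackend(), etc., to retarget
```

Because the model is expressed once and the backend is a constructor argument, retargeting to another device is a one-line change, provided the corresponding JuliaGPU package is installed and functional on the host.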
