Performance of llama.cpp with Vulkan #10879
221 comments · 348 replies
-
AMD FirePro W8100
-
AMD RX 470
-
Ubuntu 24.04, Vulkan and CUDA installed from official APT packages.
build: 4da69d1 (4351), vs CUDA on the same build/setup:
build: 4da69d1 (4351)
-
MacBook Air M2 on Asahi Linux. ggml_vulkan: Found 1 Vulkan devices:
-
Gentoo Linux on ROG Ally (2023), Ryzen Z1 Extreme. ggml_vulkan: Found 1 Vulkan devices:
-
ggml_vulkan: Found 4 Vulkan devices:
-
build: 0d52a69 (4439)
NVIDIA GeForce RTX 3090 (NVIDIA)
AMD Radeon RX 6800 XT (RADV NAVI21) (radv)
AMD Radeon (TM) Pro VII (RADV VEGA20) (radv)
Intel(R) Arc(tm) A770 Graphics (DG2) (Intel open-source Mesa driver)
-
@netrunnereve Some of the tg results here are a little low; I think they might be debug builds. The cmake step (at least on Linux) might require -DCMAKE_BUILD_TYPE=Release to produce an optimized binary.
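As a point of reference, a minimal sketch of forcing an optimized Vulkan build on Linux (flag names as in current llama.cpp; treat them as an assumption for older checkouts):
# A default single-config build without CMAKE_BUILD_TYPE can come out
# unoptimized, which would explain unusually low tg numbers.
cmake -B build -DGGML_VULKAN=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release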
-
build: 8d59d91 (4450)
Lack of proper Xe coopmat support in the ANV driver is honestly a setback.
Edit: retested both with the default batch size.
-
Here's something exotic: An AMD FirePro S10000 dual GPU from 2012 with 2x 3GB GDDR5. build: 914a82d (4452)
-
Latest Arch. For the sake of consistency I run every bit in a script and also build every target from scratch. Each run is wrapped so everything else is suspended while the benchmark runs:
kill -STOP -1
timeout 240s $COMMAND
kill -CONT -1
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Iris(R) Xe Graphics (TGL GT2) (Intel open-source Mesa driver) | uma: 1 | fp16: 1 | warp size: 32 | matrix cores: none
build: ff3fcab (4459)
This build seems to underutilise both GPU and CPU in real conditions.
-
Intel Arc A770 on Windows:
build: ba8a1f9 (4460)
-
Single GPU Vulkan
Radeon Instinct MI25: ggml_vulkan: 0 = AMD Radeon Instinct MI25 (RADV VEGA10) (radv) | uma: 0 | fp16: 1 | warp size: 64 | matrix cores: none
build: 2739a71 (4461)
Radeon Pro VII: ggml_vulkan: 0 = AMD Radeon Pro VII (RADV VEGA20) (radv) | uma: 0 | fp16: 1 | warp size: 64 | matrix cores: none
build: 2739a71 (4461)
Multi GPU Vulkan
ggml_vulkan: 0 = AMD Radeon Pro VII (RADV VEGA20) (radv) | uma: 0 | fp16: 1 | warp size: 64 | matrix cores: none
build: 2739a71 (4461)
ggml_vulkan: 0 = AMD Radeon Pro VII (RADV VEGA20) (radv) | uma: 0 | fp16: 1 | warp size: 64 | matrix cores: none
build: 2739a71 (4461)
Single GPU ROCm
Device 0: AMD Radeon Instinct MI25, compute capability 9.0, VMM: no
build: 2739a71 (4461)
Device 0: AMD Radeon Pro VII, compute capability 9.0, VMM: no
build: 2739a71 (4461)
Multi GPU ROCm
Device 0: AMD Radeon Pro VII, compute capability 9.0, VMM: no
build: 2739a71 (4461)
Layer split
build: 2739a71 (4461)
Row split
build: 2739a71 (4461)
Single GPU speed is decent, but multi GPU trails ROCm by a wide margin, especially with large models, due to the lack of row split.
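For context, llama-bench picks the multi-GPU strategy with its -sm flag; a rough sketch of the two modes being compared here (model path is an assumption):
./llama-bench -m models/llama-2-7b.Q4_0.gguf -sm layer   # layer split: available on both backends
./llama-bench -m models/llama-2-7b.Q4_0.gguf -sm row     # row split: only the ROCm runs above have it
Row split shards each weight matrix across GPUs so both work on the same layer at once, whereas layer split leaves one GPU mostly idle during token generation, which is why the gap widens on large models.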
-
AMD Radeon RX 5700 XT on Arch using mesa-git and setting a higher GPU power limit compared to the stock card.
I also think it would be interesting to add flash attention results to the scoreboard (even if support for it still isn't as mature as CUDA's).
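For anyone reproducing this, llama-bench toggles flash attention with its -fa flag; a minimal sketch (model path is an assumption):
./llama-bench -m models/llama-2-7b.Q4_0.gguf -fa 0   # baseline, no flash attention
./llama-bench -m models/llama-2-7b.Q4_0.gguf -fa 1   # flash attention enabled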
-
I tried, but there was nothing after an hour, or maybe 40 minutes... Anyway, I ran llama-cli for a sample eval...
Meanwhile, OpenBLAS:
-
Nvidia MX150 (+200 core, +1000 mem), i7-8550U with 1 channel DDR4-2400:
build: 21d31e0 (7122)
CUDA:
build: 4949ac0 (1)
-
Tiger Lake GT2 (96 EU), i7-1185G7, Ubuntu 24.04, Mesa 25.0.7. This is very slow, so I didn't bother trying SYCL.
build: f1ffbba (7124)
-
Apple M1 Mac Mini with 7-core GPU, Asahi Linux (Fedora 43) and Mesa 25.2.6. Slight improvement since my last benchmark in August. #10879 (comment)
build: 2370665 (7123)
-
NVIDIA GeForce RTX 3060
build: 92c0b38 (7118)
-
NVIDIA Tesla M40 24GB, Driver Version: 580.105.08. ggml_vulkan: Found 1 Vulkan devices:
build: b8372ee (7146)
Strangely, PP seems to be only 1/3 of what I get with CUDA, and also slower than a GTX 980.
-
Apple M2 Ultra on Arch Linux ARM (with Asahi drivers):
build: dbb852b (7142)
-
AMD Radeon RX 6650 XT on Arch Linux:
build: dbb852b (7142)
-
I've noticed a regression in build b6764 (#16203) for pp512 (AMD only) and in build b6975 (#17046) for tg128 (AMD and NVIDIA). If you can replicate these (see the script below), please post your results as a comment and I'll update the table.
Using pre-built binaries, e.g. https://github.com/ggml-org/llama.cpp/releases/download/b6763/llama-b6763-bin-ubuntu-vulkan-x64.zip
#!/usr/bin/env bash
# Usage: ./bench_releases.sh b6763 b6764 b6974 b6975 ...
set -euo pipefail

MODEL="models/llama-2-7b.Q4_0.gguf"
DEVICE="Vulkan0"

# Download and unpack each requested release (skip if already unpacked)
for ver in "$@"; do
  TARGET="llama-${ver}-bin-ubuntu-vulkan-x64"
  [ -d "$TARGET" ] && continue
  ZIP="${TARGET}.zip"
  if [ ! -f "$ZIP" ]; then
    URL="https://github.com/ggml-org/llama.cpp/releases/download/${ver}/${ZIP}"
    wget -q "$URL"
  fi
  rm -rf build
  unzip -q "$ZIP"
  mv build "$TARGET"
done

# Benchmark each release on the same device and model
for ver in "$@"; do
  BENCH="./llama-${ver}-bin-ubuntu-vulkan-x64/bin/llama-bench"
  [ -x "$BENCH" ] || continue
  "$BENCH" -dev "$DEVICE" -m "$MODEL"
done
-
NVIDIA A30. ggml_vulkan: Found 1 Vulkan devices:
build: 583cb83 (7157)
For some reason it fails to allocate memory for a 32768 context:
-
Nvidia GeForce GTX 1070 8GB. ggml_vulkan: Found 3 Vulkan devices:
build: eec1e33 (7166)
-
AMD Radeon RX Vega 56
build: 92c0b38 (7118)
-
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Radeon 8060S Graphics (RADV GFX1151) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
-
Tesla P100 16GB SXM2. For some reason I'm getting terrible PP performance, less than 1/10th of what it's supposed to be.
build: eec1e33 (7166)
Edit: much better performance with
build: eec1e33 (7166)
-
GGML_VK_DISABLE_INTEGER_DOT_PRODUCT=1 makes RADV TG speed close to AMDVLK (stopped working).
ggml_vulkan: 0 = AMD Radeon RX 9070 XT (RADV GFX1201) (radv) | uma: 0 | fp16: 1 | bf16: 1 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: KHR_coopmat
build: d82b7a7 (7193)
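A minimal sketch of applying that toggle per run (the variable is the one named above; the model path is an assumption):
GGML_VK_DISABLE_INTEGER_DOT_PRODUCT=1 ./llama-bench -m models/llama-2-7b.Q4_0.gguf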
-
This is similar to the Apple Silicon benchmark thread, but for Vulkan! We'll be testing the Llama 2 7B model like the other thread to keep things consistent, and use Q4_0 as it's simple to compute and small enough to fit on a 4GB GPU. You can download it here.
Instructions
Either run the commands below or download one of our Vulkan releases. If you have multiple GPUs, please run the test on a single GPU using -sm none -mg YOUR_GPU_NUMBER, unless the model is too big to fit in VRAM. Share your llama-bench results along with the git hash and Vulkan info string in the comments. Feel free to try other models and compare backends, but only valid runs will be placed on the scoreboard.
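As a sketch of those commands (paths and the GPU index are assumptions; adjust them for your setup):
# Build llama.cpp with the Vulkan backend in release mode
cmake -B build -DGGML_VULKAN=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release

# Benchmark Llama 2 7B Q4_0 on a single GPU (here GPU 0)
./build/bin/llama-bench -m models/llama-2-7b.Q4_0.gguf -sm none -mg 0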
If multiple entries are posted for the same setup, I'll prioritize newer commits with substantial Vulkan updates; otherwise I'll pick the one with the highest overall score at my discretion. Performance may vary depending on driver, operating system, board manufacturer, etc., even if the chip is the same. For integrated graphics, note that memory speed and the number of channels will greatly affect your inference speed!
Vulkan Scoreboard (Click on the headings to expand the section)
Llama 2 7B, Q4_0, no FA
Llama 2 7B, Q4_0, FA enabled