Forked to have better defaults sourced from the [SRE Handbook](https://s905060.gitbooks.io/site-reliability-engineer-handbook/content/fio.html).

# Usage

```
# Add the configmap
kubectl apply -f https://raw.githubusercontent.com/openinfrastructure/fio-kubernetes/master/configs.yaml
# Run the jobs for 60 seconds.
kubectl apply -f https://raw.githubusercontent.com/openinfrastructure/fio-kubernetes/master/configs.yaml
```

Collect the results using `kubectl get pods` and `kubectl logs`.

Clean up:

```
kubectl delete configmap/fio-job-config
kubectl delete job/fio
```

# Proxmox 6.4 VE

My use case is to test a base install of the following stack:
Run status group 0 (all jobs):
  WRITE: bw=22.0MiB/s (23.1MB/s), 22.0MiB/s-22.0MiB/s (23.1MB/s-23.1MB/s), io=1323MiB (1387MB), run=60004-60004msec
```

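fio reports bandwidth in both binary (MiB/s) and decimal (MB/s) units, which is why each figure in the summary line appears twice. The conversion between the two is a straight multiply; a minimal sketch (the function name `mib_to_mb` is mine, not from fio):

```python
def mib_to_mb(mib_per_s: float) -> float:
    """Convert a binary MiB/s figure (1 MiB = 2**20 bytes) to decimal MB/s (1 MB = 10**6 bytes)."""
    return mib_per_s * 2**20 / 10**6

# fio's reported 22.0 MiB/s corresponds to roughly 23.1 MB/s, matching the summary line above.
print(round(mib_to_mb(22.0), 1))  # → 23.1
```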
# Notes

Running the same job1 again, `zpool iostat 3` reports good write throughput:

```
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       49.0G  1.04T      0  15.9K      0   617M
rpool       49.0G  1.04T      0  3.42K      0   639M
rpool       49.0G  1.04T      0  17.5K      0   632M
rpool       49.0G  1.04T      0  2.32K      0   607M
rpool       49.0G  1.04T      0  16.6K      0   620M
rpool       49.0G  1.04T      0  9.26K      0   620M
rpool       53.1G  1.03T      0  9.65K      0   612M
rpool       53.1G  1.03T      0  9.19K      0   613M
rpool       53.1G  1.03T      0  8.67K      0   612M
rpool       53.1G  1.03T      0  8.83K      0   601M
rpool       53.1G  1.03T      0  10.1K      0   620M
rpool       53.1G  1.03T      0  18.1K      0   562M
rpool       57.1G  1.03T      0  14.2K      0   462M
rpool       57.1G  1.03T      0  17.2K      0   559M
rpool       57.1G  1.03T      0  4.79K      0   370M
rpool       57.1G  1.03T      0  2.32K      0   224M
rpool       57.1G  1.03T      0  2.25K      0   218M
rpool       57.1G  1.03T      0  5.92K      0   367M
rpool       57.1G  1.03T      0  12.5K      0   426M
rpool       57.1G  1.03T      0  11.9K      0   423M
rpool       57.1G  1.03T      0  1.44K      0   172M
rpool       58.7G  1.03T      0  1.72K      0   172M
rpool       58.7G  1.03T      0  2.58K      0   272M
rpool       58.7G  1.03T      0  2.63K      0   276M
rpool       58.7G  1.03T      0  2.98K      0   306M
rpool       58.7G  1.03T      0  5.51K      0   295M
rpool       58.7G  1.03T      0  9.81K      0   445M
rpool       58.7G  1.03T      0  2.57K      0   273M
rpool       58.7G  1.03T      0  3.13K      0   298M
rpool       58.7G  1.03T      0  4.10K      0   298M
rpool       58.7G  1.03T      0  2.34K      0   228M
rpool       58.7G  1.03T      0  5.12K      0   317M
rpool       58.7G  1.03T      0  3.29K      0   302M
rpool       58.7G  1.03T      0  7.20K      0   319M
rpool       60.2G  1.03T      0  10.1K      0   514M
rpool       60.2G  1.03T      0  4.14K      0   595M
rpool       58.3G  1.03T      0  3.52K      0   449M
rpool       58.3G  1.03T      0     20      0   370K
rpool       53.1G  1.03T      0    127      0  3.46M
rpool       53.1G  1.03T      0     19      0   285K
```

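To reduce a capture like the one above to a single number, one option is to average the write-bandwidth column. A small sketch, assuming `zpool iostat` size suffixes (K/M/G/T) are binary multipliers; `parse_size` and `mean_write_bandwidth` are hypothetical helpers written for this post, not part of the repo:

```python
import re

def parse_size(token: str) -> float:
    """Convert a zpool iostat size token like '617M' or '3.46M' to bytes.
    Assumes binary multipliers (K = 2**10, M = 2**20, ...)."""
    units = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}
    m = re.fullmatch(r"([\d.]+)([KMGT]?)", token)
    if not m:
        raise ValueError(f"unrecognized size token: {token!r}")
    value, unit = m.groups()
    return float(value) * units.get(unit, 1)

def mean_write_bandwidth(lines: list[str]) -> float:
    """Average the write-bandwidth column (last field) of zpool iostat data rows."""
    rates = [parse_size(line.split()[-1]) for line in lines if line.startswith("rpool")]
    return sum(rates) / len(rates)

# Demo on the first two rows of the capture above:
rows = [
    "rpool       49.0G  1.04T      0  15.9K      0   617M",
    "rpool       49.0G  1.04T      0  3.42K      0   639M",
]
print(mean_write_bandwidth(rows) / 2**20)  # → 628.0 (MiB/s)
```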
# Reference

Forked from [this post](https://medium.com/@joshua_robinson/storage-benchmarking-with-fio-in-kubernetes-14cf29dc5375)