fix: add NVIDIA CDI device for WSL2 GPU support #3895
```diff
@@ -119,6 +119,12 @@ export class LlamaCppPython extends InferenceProvider {
       Type: 'bind',
     });
 
+    devices.push({
+      PathOnHost: 'nvidia.com/gpu=all',
+      PathInContainer: '',
+      CgroupPermissions: '',
+    });
+
     devices.push({
       PathOnHost: '/dev/dxg',
       PathInContainer: '/dev/dxg',
```
Contributor

question: what happens if the podman machine does not have the nvidia CDI installed?
Author

I guess if CDI isn't configured, Podman will fail to resolve nvidia.com/gpu=all and the container won't start.
Author

Test Scenario: What happens without CDI?

- Check current CDI status in the Podman machine
- Temporarily disable CDI

Test Results

- Inference server with [ GPU ENABLED | no CDI ] in AI Lab.
- Inference server with [ GPU DISABLED | no CDI ] in AI Lab.

Why this behavior is correct: thanks to the conditional checks in this provider, the CDI device is only added when GPU is explicitly enabled in settings (see the sketch below).

Conclusion: This is the correct behavior, since RamaLama requires CDI.

Background: The "magic trick" in #1824 worked with the old ai-lab-playground-chat-cuda image (CUDA embedded). We should update the AI Lab documentation to mention that CDI is required for WSL GPU support. Maybe?
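To make that conditional concrete, here is a minimal TypeScript sketch of the kind of guard described above; the names (buildGpuDevices, gpuDetected, isWsl) are illustrative assumptions, not the provider's exact identifiers:

```typescript
interface DeviceRequest {
  PathOnHost: string;
  PathInContainer: string;
  CgroupPermissions: string;
}

// Sketch only: the CDI device is pushed only when GPU inference is explicitly
// enabled and the podman machine is a WSL2 VM. If the machine has no NVIDIA
// CDI spec installed, Podman cannot resolve 'nvidia.com/gpu=all' and container
// creation fails, which matches the "container won't start" behavior described
// earlier in this thread.
function buildGpuDevices(gpuDetected: boolean, isWsl: boolean): DeviceRequest[] {
  const devices: DeviceRequest[] = [];
  if (gpuDetected && isWsl) {
    devices.push({
      PathOnHost: 'nvidia.com/gpu=all', // CDI device name, resolved by Podman
      PathInContainer: '',
      CgroupPermissions: '',
    });
  }
  return devices;
}
```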




question: This is a flag, not a path to share. What is the rationale for doing that?
This is a CDI (Container Device Interface) device identifier. Podman uses nvidia.com/gpu=all as a CDI spec name to automatically mount all NVIDIA GPU devices.

https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html
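As an illustration, here is a small sketch contrasting the two kinds of entries the Devices array can carry; the CDI entry's values come from the diff above, while the /dev/dxg entry's trailing fields are assumed (the hunk is truncated there):

```typescript
// A fully-qualified CDI name (vendor.com/class=name) in PathOnHost is not a
// filesystem path: Podman recognizes the format and expands it from the
// installed CDI specs, mounting all NVIDIA GPU devices they describe.
const cdiGpuDevice = {
  PathOnHost: 'nvidia.com/gpu=all',
  PathInContainer: '', // empty: the CDI spec supplies the in-container paths
  CgroupPermissions: '',
};

// A regular device node is passed through literally, as with /dev/dxg on WSL2.
const dxgDevice = {
  PathOnHost: '/dev/dxg',
  PathInContainer: '/dev/dxg',
  CgroupPermissions: '', // assumed; not visible in the truncated hunk
};
```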
Based on that link - I'm not an expert - I believe the Podman Devices array accepts CDI device names like nvidia.com/gpu=all in PathOnHost, so when Podman sees that format, it automatically resolves it via CDI and mounts all GPU devices.
This is the same pattern used for Linux (see the screenshot above).
The alternative would be using the --device CLI flag, but since we're using the API, this is the equivalent approach.
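A rough sketch of that equivalence, assuming a Dockerode-style create payload (the image name and the HostConfig.Devices placement are illustrative, not taken from this diff):

```typescript
// Roughly the API-side counterpart of:
//   podman run --device nvidia.com/gpu=all --device /dev/dxg ...
const createOptions = {
  Image: 'example.io/inference/llamacpp:latest', // placeholder image reference
  HostConfig: {
    Devices: [
      { PathOnHost: 'nvidia.com/gpu=all', PathInContainer: '', CgroupPermissions: '' },
      { PathOnHost: '/dev/dxg', PathInContainer: '/dev/dxg', CgroupPermissions: '' },
    ],
  },
};
// Podman resolves the CDI name here just as it would for the --device flag,
// provided the machine has an NVIDIA CDI spec installed.
```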