This repository was archived by the owner on Nov 27, 2024. It is now read-only.
The `appsettings.json` is the easiest option for configuring model sets. Below is an example of a `clip tokenizer` model set. The original snippet was truncated, so the following is an illustrative sketch; treat the property names and path as placeholders and consult the `appsettings.json` shipped with the project for the exact schema:

```json
{
  "Name": "Clip Tokenizer",
  "OnnxModelPath": "D:\\Models\\cliptokenizer.onnx"
}
```

**OnnxStack.StableDiffusion/README.md** (excerpt; the surrounding lines of `internal class AppService : IHostedService` were truncated):

```csharp
internal class AppService : IHostedService
{
    // ...
        return Task.CompletedTask;
    }
}
```
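To show how such a hosted service is wired into a .NET generic host, here is a self-contained sketch. It is a simplified stand-in for the README's `AppService`, not the project's actual implementation; the `AddHostedService` registration and `Host.CreateDefaultBuilder` calls are the standard `Microsoft.Extensions.Hosting` APIs:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Simplified stand-in for the README's AppService (illustrative only).
internal class AppService : IHostedService
{
    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Model loading / pipeline setup would happen here.
        Console.WriteLine("AppService started");
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        return Task.CompletedTask;
    }
}

internal class Program
{
    private static async Task Main()
    {
        // Register the hosted service; the host calls StartAsync when run.
        using IHost host = Host.CreateDefaultBuilder()
            .ConfigureServices(services => services.AddHostedService<AppService>())
            .Build();

        await host.StartAsync();
        await host.StopAsync();
    }
}
```

The host invokes `StartAsync` on startup and `StopAsync` on shutdown, which is why the README's example ends by returning `Task.CompletedTask`.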
## Configuration
The `appsettings.json` is the easiest option for configuring model sets. Below is an example of `Stable Diffusion 1.5`.
The example adds the necessary paths to each model file required for Stable Diffusion, as well as any model-specific configurations.
Each model can be assigned to its own device, which is handy if you have only a small GPU. This way, you can offload only what you need. There are limitations depending on the version of the `Microsoft.ML.OnnxRuntime` package you are using, but in most cases, you can split the load between CPU and GPU.
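Given the device-assignment point above, a hypothetical model set that keeps the lighter components on CPU while offloading the UNet to GPU might look like this. All property names and paths here are illustrative placeholders, not OnnxStack's guaranteed schema; consult the `appsettings.json` shipped with the project for the exact keys:

```json
{
  "Name": "StableDiffusion 1.5",
  "ModelConfigurations": [
    { "Type": "Tokenizer",   "OnnxModelPath": "D:\\Models\\cliptokenizer.onnx",       "ExecutionProvider": "Cpu",      "DeviceId": 0 },
    { "Type": "TextEncoder", "OnnxModelPath": "D:\\Models\\text_encoder\\model.onnx", "ExecutionProvider": "Cpu",      "DeviceId": 0 },
    { "Type": "Unet",        "OnnxModelPath": "D:\\Models\\unet\\model.onnx",         "ExecutionProvider": "DirectML", "DeviceId": 0 },
    { "Type": "VaeDecoder",  "OnnxModelPath": "D:\\Models\\vae_decoder\\model.onnx",  "ExecutionProvider": "DirectML", "DeviceId": 0 }
  ]
}
```

This mirrors the CPU/GPU split described above: tokenization and text encoding are cheap enough for the CPU, while the UNet dominates compute and benefits most from the GPU.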