This repository was archived by the owner on Nov 27, 2024. It is now read-only.

Commit 04d64fc

Update README
1 parent a362793 commit 04d64fc

File tree: 1 file changed (+31, -73 lines changed)


OnnxStack.Core/README.md

Lines changed: 31 additions & 73 deletions
````diff
@@ -1,105 +1,63 @@
 # OnnxStack.Core - Onnx Services for .NET Applications

-OnnxStack.Core is a library that provides higher-level ONNX services for use in .NET applications. It offers extensive support for features such as dependency injection, .NET configuration implementations, ASP.NET Core integration, and IHostedService support.
-
-You can configure a model set for runtime, offloading individual models to different devices to make better use of resources or run on lower-end hardware. The first use-case is StableDiffusion; however, it will be expanded, and other model sets, such as object detection and classification, will be added.
+OnnxStack.Core is a library that provides simplified wrappers for OnnxRuntime.

 ## Getting Started

-
 OnnxStack.Core can be found via the NuGet package manager; download and install it.
 ```
 PM> Install-Package OnnxStack.Core
 ```

-
-### .NET Core Registration
-
-You can easily integrate `OnnxStack.Core` into your application services layer. This registration process sets up the necessary services and loads the `appsettings.json` configuration.
-
-Example: Registering OnnxStack
-```csharp
-builder.Services.AddOnnxStack();
-```
-
 ## Dependencies
-Video processing support requires FFMPEG and FFPROBE binaries; the files must be present in your output folder or at the destinations configured in `appsettings.json`.
+Video processing support requires FFMPEG and FFPROBE binaries; the files must be present in your output folder or at the destinations configured at runtime.
 ```
 https://ffbinaries.com/downloads
 https://github.com/ffbinaries/ffbinaries-prebuilt/releases/download/v6.1/ffmpeg-6.1-win-64.zip
 https://github.com/ffbinaries/ffbinaries-prebuilt/releases/download/v6.1/ffprobe-6.1-win-64.zip
 ```

-## Configuration example
-The `appsettings.json` file is the easiest option for configuring model sets. Below is an example for the `clip tokenizer`.

-```json
-{
-  "Logging": {
-    "LogLevel": {
-      "Default": "Information",
-      "Microsoft.AspNetCore": "Warning"
-    }
-  },
-  "AllowedHosts": "*",
-
-  "OnnxStackConfig": {
-    "OnnxModelSets": [
-      {
-        "Name": "ClipTokenizer",
-        "IsEnabled": true,
-        "DeviceId": 0,
-        "InterOpNumThreads": 0,
-        "IntraOpNumThreads": 0,
-        "ExecutionMode": "ORT_SEQUENTIAL",
-        "ExecutionProvider": "DirectML",
-        "ModelConfigurations": [
-          {
-            "Type": "Tokenizer",
-            "OnnxModelPath": "cliptokenizer.onnx"
-          }
-        ]
-      }
-    ]
-  }
-}
-```
-
-
-
-### Basic C# Example
+### OnnxModelSession Example
 ```csharp

-// Tokenizer model Example
+// CLIP Tokenizer Example
 //----------------------//

-// From DI
-OnnxStackConfig _onnxStackConfig;
-IOnnxModelService _onnxModelService;
+// Model Configuration
+var config = new OnnxModelConfig
+{
+    DeviceId = 0,
+    InterOpNumThreads = 0,
+    IntraOpNumThreads = 0,
+    ExecutionMode = ExecutionMode.ORT_SEQUENTIAL,
+    ExecutionProvider = ExecutionProvider.DirectML,
+    OnnxModelPath = "cliptokenizer.onnx"
+};

-// Get Model
-var model = _onnxStackConfig.OnnxModelSets.First();
+// Create Model Session
+var modelSession = new OnnxModelSession(config);

-// Get Model Metadata
-var metadata = _onnxModelService.GetModelMetadata(model, OnnxModelType.Tokenizer);
+// Get Metadata
+var modelMetadata = await modelSession.GetMetadataAsync();

-// Create Input
+// Create Input Tensor
 var text = "Text To Tokenize";
 var inputTensor = new DenseTensor<string>(new string[] { text }, new int[] { 1 });

-// Create Inference Parameters container
-using (var inferenceParameters = new OnnxInferenceParameters(metadata))
+// Create Inference Parameters
+using (var inferenceParameters = new OnnxInferenceParameters(modelMetadata))
 {
-    // Set Inputs and Outputs
-    inferenceParameters.AddInputTensor(inputTensor);
-    inferenceParameters.AddOutputBuffer();
-
-    // Run Inference
-    using (var results = _onnxModelService.RunInference(model, OnnxModelType.Tokenizer, inferenceParameters))
-    {
-        // Extract Result
-        var resultData = results[0].ToDenseTensor();
-    }
+    // Set Inputs and Outputs
+    inferenceParameters.AddInputTensor(inputTensor);
+    inferenceParameters.AddOutputBuffer();
+
+    // Run Inference
+    using (var results = modelSession.RunInference(inferenceParameters))
+    {
+        // Extract Result Tokens
+        var resultData = results[0].ToArray<long>();
+    }
 }

 ```
````
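For anyone adopting the new API from this diff, the added example assembles into a self-contained program as sketched below. The `using` directives are assumptions, not part of the commit: `ExecutionMode` and `DenseTensor<T>` ship with the Microsoft.ML.OnnxRuntime package, while the `OnnxStack.Core.*` namespaces are inferred from the type names in the diff and may differ in your version of the library.

```csharp
// A minimal sketch of the README's new OnnxModelSession example as a complete,
// top-level-statements program. The namespaces below are assumptions; adjust
// them to match your OnnxStack.Core version.
using System;
using Microsoft.ML.OnnxRuntime;           // ExecutionMode.ORT_SEQUENTIAL
using Microsoft.ML.OnnxRuntime.Tensors;   // DenseTensor<T>
using OnnxStack.Core.Config;              // assumed: OnnxModelConfig, ExecutionProvider
using OnnxStack.Core.Model;               // assumed: OnnxModelSession, OnnxInferenceParameters

// Configure a single ONNX model (values taken from the README example)
var config = new OnnxModelConfig
{
    DeviceId = 0,
    InterOpNumThreads = 0,
    IntraOpNumThreads = 0,
    ExecutionMode = ExecutionMode.ORT_SEQUENTIAL,
    ExecutionProvider = ExecutionProvider.DirectML,
    OnnxModelPath = "cliptokenizer.onnx"
};

// Create the session and read the model's input/output metadata
var modelSession = new OnnxModelSession(config);
var modelMetadata = await modelSession.GetMetadataAsync();

// Tokenize a batch of one string (tensor shape [1])
var inputTensor = new DenseTensor<string>(new[] { "Text To Tokenize" }, new[] { 1 });

using (var inferenceParameters = new OnnxInferenceParameters(modelMetadata))
{
    // Bind the input and reserve an output buffer
    inferenceParameters.AddInputTensor(inputTensor);
    inferenceParameters.AddOutputBuffer();

    // Run inference and read back the token ids as 64-bit integers
    using (var results = modelSession.RunInference(inferenceParameters))
    {
        var tokens = results[0].ToArray<long>();
        Console.WriteLine(string.Join(" ", tokens));
    }
}
```

Apart from the `using` directives and the final `Console.WriteLine`, this follows the diff's added lines verbatim.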
