
Commit f0488f9

Added sample for inferring with ONNX models
1 parent 5822254 commit f0488f9

File tree

7 files changed: +191 additions, -1 deletion
README.md

Lines changed: 1 addition & 0 deletions
@@ -25,6 +25,7 @@ This is a collection of Python function samples on Azure Functions 2.X. For a co
 | [timer-trigger-cosmos-output-binding](v2functions/timer-trigger-cosmosdb-output-binding) | Azure Functions Timer Trigger Python Sample. The function gets blog RSS feed and store the results into CosmosDB using Cosmos DB output binding | Timer | NONE | CosmosDB |
 | [http-trigger-blob-sas-token](v2functions/http-trigger-blob-sas-token) | Azure Function HTTP Trigger Python Sample that returns a SAS token for Azure Storage for the specified container and blob name | HTTP | NONE | HTTP |
 | [http-trigger-dump-request](v2functions/http-trigger-dump-request) | Azure Function HTTP Trigger Python Sample that returns request dump info with JSON format | HTTP | NONE | HTTP |
+| [http-trigger-onnx-model](v2functions/http-trigger-onnx-model) | This function demonstrates running an inference using an ONNX model. It is triggered by an HTTP request. | HTTP | NONE | HTTP |
 | [blob-trigger-watermark-blob-out-binding](v2functions/blob-trigger-watermark-blob-out-binding) | Azure Function Python Sample that watermarks an image. This function triggers on an input blob (image) and adds a watermark by calling into the Pillow library. The resulting composite image is then written back to blob storage using a blob output binding. | Blob Storage | Blob Storage | Blob Storage |
 | [sbqueue-trigger-sbqueue-out-binding](v2functions/sbqueue-trigger-sbqueue-out-binding) | Azure Functions Service Bus Queue Trigger Python Sample. The function demonstrates reading from a Service Bus queue and placing a message into an output Service Bus queue. | Service Bus Queue | None | Service Bus Queue |

Lines changed: 65 additions & 0 deletions
@@ -0,0 +1,65 @@
import logging
import azure.functions as func
import onnxruntime
from PIL import Image
import numpy as np
import io

def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    body = req.get_body()

    try:
        image = Image.open(io.BytesIO(body))
    except IOError:
        return func.HttpResponse(
            "Bad input. Unable to cast request body to an image format.",
            status_code=400
        )

    result = run_inference(image, context)

    return func.HttpResponse(result)

def run_inference(image, context):
    # See https://github.com/onnx/models/tree/master/vision/style_transfer/fast_neural_style
    # for implementation details
    model_path = f'{context.function_directory}/rain_princess.onnx'
    session = onnxruntime.InferenceSession(model_path)
    metadata = session.get_modelmeta()
    logging.info(f'Model metadata:\n' +
                 f'  Graph name: {metadata.graph_name}\n' +
                 f'  Model version: {metadata.version}\n' +
                 f'  Producer: {metadata.producer_name}')

    # Preprocess image
    original_image_size = image.size[0], image.size[1]
    logging.info('Preprocessing image...')
    # Model expects a 224x224 shape input
    image = image.resize((224, 224), Image.ANTIALIAS)
    x = np.array(image).astype('float32')
    x = np.transpose(x, [2, 0, 1])
    x = np.expand_dims(x, axis=0)

    output_name = session.get_outputs()[0].name
    input_name = session.get_inputs()[0].name
    logging.info('Running inference on ONNX model...')
    result = session.run([output_name], {input_name: x})[0][0]

    # Postprocess image
    result = np.clip(result, 0, 255)
    result = result.transpose(1, 2, 0).astype("uint8")
    img = Image.fromarray(result)
    max_width = 800
    height = int(max_width * original_image_size[1] / original_image_size[0])
    # Upsample and correct aspect ratio for final image
    img = img.resize((max_width, height), Image.BICUBIC)

    # Store inferred image as in memory byte array
    img_byte_arr = io.BytesIO()
    # Convert composite to RGB so we can return JPEG
    img.convert('RGB').save(img_byte_arr, format='JPEG')
    final_image = img_byte_arr.getvalue()

    return final_image
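
The inference logic above can also be exercised outside the Functions host. A minimal, hypothetical smoke test follows; it assumes `run_inference` has been copied into an importable module (here called `style_transfer.py`, a placeholder name) and that the model file and a test image sit in the working directory.

```python
# Hypothetical local smoke test for run_inference, outside the Functions host.
# Assumes the sample code lives in an importable module named style_transfer.py
# and that rain_princess.onnx and test.jpg (placeholder names) are in this folder.
from types import SimpleNamespace
from PIL import Image

from style_transfer import run_inference

# run_inference only reads context.function_directory, so a simple stand-in works
fake_context = SimpleNamespace(function_directory='.')

image = Image.open('test.jpg')
jpeg_bytes = run_inference(image, fake_context)

with open('styled.jpg', 'wb') as out:
    out.write(jpeg_bytes)
```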
Binary file (1.47 MB) not shown.
Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "authLevel": "function",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "methods": [
                "get",
                "post"
            ]
        },
        {
            "type": "http",
            "direction": "out",
            "name": "$return"
        }
    ]
}
Binary file (6.42 MB) not shown.
Lines changed: 102 additions & 0 deletions
@@ -0,0 +1,102 @@
# http-trigger-onnx-model (Python)

| Sample | Description | Trigger | In Bindings | Out Bindings |
| ------------- | ------------- | ------------- | ----------- | ----------- |
| `http-trigger-onnx-model` | This function demonstrates running an inference against an ONNX model. | HTTP | NONE | HTTP |

The style transfer model used in this function is called _Rain Princess_ and is downloaded from the [ONNX Model Zoo][3]. Artistic style transfer models mix the content of an image with the style of another image. Examples of the styles can be seen [here][4].

Open Neural Network Exchange (ONNX) is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools.

The ONNX Model Zoo is a collection of pre-trained, state-of-the-art models in the ONNX format, contributed by community members like you. See https://github.com/onnx/models for more.

You should be able to use other ONNX models in your function by rewriting the preprocess/postprocess code and wiring up the model's expected inputs and outputs; a sketch of how to inspect those is shown below.

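For example, a minimal sketch like the following (assuming `onnxruntime` is installed; the model path is a placeholder you swap for the model you want to adapt) prints each input's and output's name, shape, and type, which tells you what the preprocess/postprocess code has to produce and consume:

```python
# Minimal sketch: inspect an ONNX model's expected inputs and outputs with
# onnxruntime before wiring it into the function. The model path below is a
# placeholder -- point it at whichever model you want to adapt.
import onnxruntime

def describe_model(model_path: str) -> None:
    session = onnxruntime.InferenceSession(model_path)
    for inp in session.get_inputs():
        print(f'input:  name={inp.name}, shape={inp.shape}, type={inp.type}')
    for out in session.get_outputs():
        print(f'output: name={out.name}, shape={out.shape}, type={out.type}')

describe_model('rain_princess.onnx')
```

For this sample's model, the reported input should line up with the 224x224 RGB tensor the function prepares before calling `session.run`.
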
## Sample run
![Screenshot](example.png)
This example is probably not going to age well. However, the pun stands on its own. Shown here: [httpie][1], [imgcat][2].

## Dependencies
```
Pillow==7.0.0
onnxruntime==1.1.0
numpy==1.18.1
```

## Logging includes model metadata

```
[1/19/20 8:00:25 PM] Python HTTP trigger function processed a request.
[1/19/20 8:00:25 PM] Model metadata:
[1/19/20 8:00:25 PM]   Graph name: torch-jit-export
[1/19/20 8:00:25 PM]   Model version: 9223372036854775807
[1/19/20 8:00:25 PM]   Producer: pytorch
[1/19/20 8:00:25 PM] Preprocessing image...
[1/19/20 8:00:25 PM] Running inference on ONNX model...
```

## Configuration
As specified in `function.json`, this function is triggered by an HTTP request. It expects a POST request whose body is raw image bytes (JPEG, PNG, or any other format the Pillow library can open). The output is an HTTP response containing the resulting style-transferred image, encoded as JPEG.

```json
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "authLevel": "function",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "methods": [
                "get",
                "post"
            ]
        },
        {
            "type": "http",
            "direction": "out",
            "name": "$return"
        }
    ]
}
```

## How to develop and publish the functions

### Local development

```sh
func host start
```

### Try it out
```bash
# Make a POST request
$ curl -s --data-binary @babyyoda.jpg http://localhost:7071/api/http-trigger-onnx-model -o out.jpg

# Open the resulting image (on a Mac)
# Use feh or xdg-open on Linux
$ open out.jpg
```

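The same request can be made from Python. Here is a small client sketch using the third-party `requests` package (an assumption, not one of this sample's dependencies); the file names are placeholders:

```python
# Sketch of a Python client for the local endpoint. Uses the third-party
# `requests` package (an assumption, not a sample dependency); file names are
# placeholders.
import requests

with open('babyyoda.jpg', 'rb') as f:
    image_bytes = f.read()

# The function expects the raw image bytes as the POST body.
resp = requests.post('http://localhost:7071/api/http-trigger-onnx-model', data=image_bytes)
resp.raise_for_status()

with open('out.jpg', 'wb') as f:
    f.write(resp.content)  # JPEG-encoded, style-transferred image
```
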
### Publish the function to the cloud

```sh
FUNCTION_APP_NAME="MyFunctionApp"
func azure functionapp publish $FUNCTION_APP_NAME --build-native-deps --no-bundler
```

Add Function App settings:
```sh
FUNCTION_STORAGE_CONNECTION="*************"
az webapp config appsettings set \
    -n $FUNCTION_APP_NAME \
    -g $RESOURCE_GROUP \
    --settings \
    MyStorageConnectionString=$FUNCTION_STORAGE_CONNECTION
```


[1]: https://httpie.org/
[2]: https://iterm2.com/documentation-images.html
[3]: https://github.com/onnx/models/tree/master/vision/style_transfer/fast_neural_style
[4]: https://github.com/pytorch/examples/tree/master/fast_neural_style#models

v2functions/requirements.txt

Lines changed: 3 additions & 1 deletion
@@ -8,4 +8,6 @@ six==1.11.0
 # Additional packages
 requests==2.20.1
 feedparser==5.2.1
-pillow>=6.2.0
+Pillow==7.0.0
+numpy==1.18.1
+onnxruntime==1.1.0
