docs/python/ReadMeOV.rst (5 additions, 3 deletions)
@@ -7,6 +7,7 @@ OpenVINO™ Execution Provider for ONNX Runtime accelerates inference across man
 - Intel® CPUs
 - Intel® integrated GPUs
 - Intel® discrete GPUs
+- Intel® integrated NPUs (Windows only)
 
 Installation
 ------------
@@ -15,26 +16,27 @@ Requirements
 ^^^^^^^^^^^^
 
 - Ubuntu 18.04, 20.04, RHEL(CPU only) or Windows 10 - 64 bit
-- Python 3.8 or 3.9 or 3.10 for Linux and only Python3.10 for Windows
+- Python 3.9, 3.10, or 3.11 for Linux, and Python 3.10 or 3.11 for Windows
 
 This package supports:
 - Intel® CPUs
 - Intel® integrated GPUs
 - Intel® discrete GPUs
+- Intel® integrated NPUs (Windows only)
 
 ``pip3 install onnxruntime-openvino``
 
 Please install OpenVINO™ PyPi Package separately for Windows.
 For installation instructions on Windows please refer to `OpenVINO™ Execution Provider for ONNX Runtime for Windows <https://github.com/intel/onnxruntime/releases/>`_.
 
-**OpenVINO™ Execution Provider for ONNX Runtime** Linux Wheels comes with pre-built libraries of OpenVINO™ version 2023.0.0 eliminating the need to install OpenVINO™ separately. The OpenVINO™ libraries are prebuilt with CXX11_ABI flag set to 0.
+**OpenVINO™ Execution Provider for ONNX Runtime** Linux wheels come with pre-built libraries of OpenVINO™ version 2024.1.0, eliminating the need to install OpenVINO™ separately.
 
 For more details on build and installation please refer to `Build <https://onnxruntime.ai/docs/build/eps.html#openvino>`_.
 
 Usage
 ^^^^^
 
-By default, Intel® CPU is used to run inference. However, you can change the default option to either Intel® integrated or discrete GPU.
+By default, Intel® CPU is used to run inference. However, you can change the default option to Intel® integrated GPU, discrete GPU, or integrated NPU (Windows only).
 Invoke `the provider config device type argument <https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#summary-of-options>`_ to change the hardware on which inferencing is done.
 
 For more API calls and environment variables, see `Usage <https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options>`_.
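After ``pip3 install onnxruntime-openvino``, a quick way to confirm the wheel registered the OpenVINO™ execution provider is to list the available providers. A minimal sketch; the exact provider list depends on your build and hardware:

.. code-block:: python

   import onnxruntime as ort

   # The onnxruntime-openvino wheel registers its execution provider with
   # ONNX Runtime; 'OpenVINOExecutionProvider' should appear in this list.
   print(ort.get_available_providers())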
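To illustrate the provider config device type argument mentioned above, here is a sketch of selecting the target device at session creation. ``model.onnx`` is a placeholder path, and the valid ``device_type`` values (e.g. ``CPU``, ``GPU``, ``NPU``) depend on the installed package version and the hardware present:

.. code-block:: python

   import onnxruntime as ort

   # "model.onnx" is a placeholder for any ONNX model file.
   # device_type redirects inference from the default Intel® CPU to, e.g.,
   # an integrated/discrete GPU or (on Windows) an integrated NPU; ONNX
   # Runtime's CPU provider remains available as a fallback.
   session = ort.InferenceSession(
       "model.onnx",
       providers=["OpenVINOExecutionProvider"],
       provider_options=[{"device_type": "GPU"}],
   )
   print(session.get_providers())  # shows which providers are active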