Commit 9ff9631

Merge pull request #395 from intel/vishnu/python-readme-update
Update ReadMeOV.rst
2 parents: a5acee0 + 937b0e6


docs/python/ReadMeOV.rst

Lines changed: 5 additions & 3 deletions
@@ -7,6 +7,7 @@ OpenVINO™ Execution Provider for ONNX Runtime accelerates inference across man
 - Intel® CPUs
 - Intel® integrated GPUs
 - Intel® discrete GPUs
+- Intel® integrated NPUs (Windows only)
 
 Installation
 ------------
@@ -15,26 +16,27 @@ Requirements
 ^^^^^^^^^^^^
 
 - Ubuntu 18.04, 20.04, RHEL(CPU only) or Windows 10 - 64 bit
-- Python 3.8 or 3.9 or 3.10 for Linux and only Python3.10 for Windows
+- Python 3.9 or 3.10 or 3.11 for Linux and Python 3.10, 3.11 for Windows
 
 This package supports:
 - Intel® CPUs
 - Intel® integrated GPUs
 - Intel® discrete GPUs
+- Intel® integrated NPUs (Windows only)
 
 ``pip3 install onnxruntime-openvino``
 
 Please install OpenVINO™ PyPi Package separately for Windows.
 For installation instructions on Windows please refer to `OpenVINO™ Execution Provider for ONNX Runtime for Windows <https://github.com/intel/onnxruntime/releases/>`_.
 
-**OpenVINO™ Execution Provider for ONNX Runtime** Linux Wheels comes with pre-built libraries of OpenVINO™ version 2023.0.0 eliminating the need to install OpenVINO™ separately. The OpenVINO™ libraries are prebuilt with CXX11_ABI flag set to 0.
+**OpenVINO™ Execution Provider for ONNX Runtime** Linux Wheels comes with pre-built libraries of OpenVINO™ version 2024.1.0 eliminating the need to install OpenVINO™ separately.
 
 For more details on build and installation please refer to `Build <https://onnxruntime.ai/docs/build/eps.html#openvino>`_.
 
 Usage
 ^^^^^
 
-By default, Intel® CPU is used to run inference. However, you can change the default option to either Intel® integrated or discrete GPU.
+By default, Intel® CPU is used to run inference. However, you can change the default option to either Intel® integrated GPU, discrete GPU, integrated NPU (Windows only).
 Invoke `the provider config device type argument <https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#summary-of-options>`_ to change the hardware on which inferencing is done.
 
 For more API calls and environment variables, see `Usage <https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options>`_.
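For reference, the device selection described in the updated Usage section happens when the inference session is created. A minimal sketch, assuming the package is installed via ``pip3 install onnxruntime-openvino``; the model path ``model.onnx`` is a placeholder, and the ``device_type`` values follow the OpenVINO™ Execution Provider configuration options linked above:

    import onnxruntime as ort

    # "model.onnx" is a placeholder path; device_type may be "CPU", "GPU",
    # or "NPU" (Windows only), per the configuration options linked above.
    session = ort.InferenceSession(
        "model.onnx",
        providers=["OpenVINOExecutionProvider"],
        provider_options=[{"device_type": "GPU"}],
    )
    print(session.get_providers())  # verify the OpenVINO provider is active

If ``device_type`` is omitted, inference runs on the default device (Intel® CPU, per the README).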
