OpenVINO™ Runtime supports models in the Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using the Model Optimizer tool included in the OpenVINO™ toolkit. Find more information about conversion in the [Model Optimizer Developer Guide](@ref openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide). After a successful conversion, you get a model in the IR format, with the `*.xml` file representing the net graph and the `*.bin` file containing the net parameters.

> **NOTE**: Image preprocessing parameters (mean and scale) must be built into a converted model to simplify model usage.

> **NOTE**: If a model input is a color image, the color channel order should be `BGR`.
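
For illustration, a conversion command that bakes the preprocessing into the IR might look like the sketch below. It assumes the `mo` entry point from the `openvino-dev` package; the model file name and the mean/scale values are placeholders, not values for a specific model:

```sh
# Convert an ONNX model to IR; model.onnx and the mean/scale values are placeholders.
# --reverse_input_channels embeds an RGB<->BGR swap so a model trained on RGB images accepts BGR input.
mo --input_model model.onnx \
   --mean_values "[123.675,116.28,103.53]" \
   --scale_values "[58.395,57.12,57.375]" \
   --reverse_input_channels \
   --output_dir ir_output
```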
For demo input image or video files, refer to the section **Media Files Available for Demos** in the [Open Model Zoo Demos Overview](../../README.md).
The list of models supported by the demo is in the `<omz_dir>/demos/3d_segmentation_demo/python/models.lst` file.
This file can be used as a parameter for [Model Downloader](../../../tools/model_tools/README.md) and Converter to download and, if necessary, convert models to OpenVINO IR format (\*.xml + \*.bin).
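
For example, assuming the `openvino-dev` package (which provides the `omz_downloader` and `omz_converter` entry points) is installed, the models for this demo could be fetched and converted like this:

```sh
# Download every model listed in models.lst and convert the non-IR ones to IR.
omz_downloader --list <omz_dir>/demos/3d_segmentation_demo/python/models.lst --output_dir models
omz_converter --list <omz_dir>/demos/3d_segmentation_demo/python/models.lst --download_dir models --output_dir models
```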
The Open Model Zoo demo applications are console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models running inference simultaneously, for example detecting a person in a video stream while also detecting the person's physical attributes, such as age, gender, and emotional state.
Source code of the demos can be obtained from the Open Model Zoo [GitHub repository](https://github.com/openvinotoolkit/open_model_zoo/).
C++, C++ G-API and Python\* versions are located in the `cpp`, `cpp_gapi` and `python` subdirectories respectively.
The Open Model Zoo includes the following demos:
- [Single Human Pose Estimation Python\* Demo](./single_human_pose_estimation_demo/python/README.md) - 2D human pose estimation demo.
- [Smart Classroom C++ Demo](./smart_classroom_demo/cpp/README.md) - Face recognition and action detection demo for classroom environment.
- [Smart Classroom C++ G-API Demo](./smart_classroom_demo/cpp_gapi/README.md) - Face recognition and action detection demo for classroom environment. G-API version.
- [Smartlab Python\* Demo](./smartlab_demo/python/README.md) - Action recognition and object detection for smartlab.
- [Social Distance C++ Demo](./social_distance_demo/cpp/README.md) - This demo showcases a retail social distance application that detects people and measures the distance between them.
- [Text Detection C++ Demo](./text_detection_demo/cpp/README.md) - Text Detection demo. It detects and recognizes multi-oriented scene text on an input image and puts a bounding box around the detected area.
## Demos that Support Pre-Trained Models
> **NOTE**: OpenVINO™ Runtime HDDL plugin is available in the [proprietary](https://software.intel.com/en-us/openvino-toolkit) distribution only.

You can download the [Intel pre-trained models](../models/intel/index.md) or [public pre-trained models](../models/public/index.md) using the OpenVINO [Model Downloader](../tools/model_tools/README.md).
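
For instance, a single model can be downloaded by name in a chosen precision; the model name below is a placeholder:

```sh
# Download one pre-trained model into the models/ directory.
omz_downloader --name <model_name> --precisions FP16 --output_dir models
```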
## Build the Demo Applications
To build the demos, you need to source the OpenVINO™ and OpenCV environment. You can install the OpenVINO™ toolkit using the installation package for the [Intel® Distribution of OpenVINO™ toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit-download.html) or build the open-source version available in the [OpenVINO GitHub repository](https://github.com/openvinotoolkit/openvino) using the [build instructions](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).
For the Intel® Distribution of OpenVINO™ toolkit installed to the `<INSTALL_DIR>` directory on your machine, run the following commands to download OpenCV and set environment variables before building the demos:
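
A minimal Linux sketch of this step is shown below; the OpenCV download step uses a helper script shipped under `<INSTALL_DIR>/extras`, whose exact name and location vary between releases, so check your installation:

```sh
# Initialize the OpenVINO™ environment (use <INSTALL_DIR>\setupvars.bat on Windows).
source <INSTALL_DIR>/setupvars.sh
# Then run the OpenCV download script shipped under <INSTALL_DIR>/extras
# (the script name differs between releases).
```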
> **NOTE:** If you plan to use Python\* demos only, you can install the OpenVINO Python\* package.
> ```sh
> pip install openvino
> ```
For the open-source version of OpenVINO, set the following variables:
* `InferenceEngine_DIR` pointing to a folder containing `InferenceEngineConfig.cmake`
* `OpenVINO_DIR` pointing to a folder containing `OpenVINOConfig.cmake`
* `ngraph_DIR` pointing to a folder containing `ngraphConfig.cmake`
* `OpenCV_DIR` pointing to OpenCV. The same OpenCV version should be used for both the OpenVINO and demos builds.

Alternatively, these values can be provided via the command line while running `cmake`. See the [CMake search procedure](https://cmake.org/cmake/help/latest/command/find_package.html#search-procedure).
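
As a sketch, passing these locations on the `cmake` command line might look like this; all paths are placeholders for your own build trees:

```sh
# Point cmake at the folders containing the *Config.cmake files listed above.
cmake -DOpenVINO_DIR=<openvino_build_dir> \
      -DInferenceEngine_DIR=<openvino_build_dir> \
      -Dngraph_DIR=<openvino_build_dir> \
      -DOpenCV_DIR=<opencv_build_dir> \
      <omz_dir>/demos
```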
Also add paths to the built OpenVINO™ Runtime libraries to the `LD_LIBRARY_PATH` (Linux) or `PATH` (Windows) variable before building the demos.
### <a name="build_demos_linux"></a>Build the Demo Applications on Linux*
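
The build itself is a standard CMake flow. A minimal sketch, assuming the environment has been set up as described above, is shown below; the demos also ship a `build_demos.sh` helper script in `<omz_dir>/demos` that automates these steps:

```sh
# Out-of-source Release build of all demos; <INSTALL_DIR> and <omz_dir> are placeholders.
source <INSTALL_DIR>/setupvars.sh
mkdir -p ~/omz_demos_build && cd ~/omz_demos_build
cmake -DCMAKE_BUILD_TYPE=Release <omz_dir>/demos
cmake --build . -- -j"$(nproc)"
```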
### Get Ready for Running the Demo Applications on Linux*
Before running compiled binary files, make sure your application can find the OpenVINO™ and OpenCV libraries.
If you use a [proprietary](https://software.intel.com/en-us/openvino-toolkit) distribution to build demos,
run the `setupvars` script to set all necessary environment variables:
```sh
source <INSTALL_DIR>/setupvars.sh
```
If you use your own OpenVINO™ and OpenCV binaries to build the demos, please make sure you have added them
to the `LD_LIBRARY_PATH` environment variable.
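
For example (both directories are placeholders for your own builds):

```sh
# Make the custom OpenVINO™ and OpenCV shared libraries visible to the demos.
export LD_LIBRARY_PATH=<openvino_lib_dir>:<opencv_lib_dir>:$LD_LIBRARY_PATH
```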
**(Optional)**: The OpenVINO environment variables are removed when you close the shell. You can set them permanently as follows:

1. Open the `.bashrc` file in `<user_home_directory>`, for example with `vi <user_home_directory>/.bashrc`.
2. Add this line to the end of the file:
```sh
source <INSTALL_DIR>/setupvars.sh
```
3. Save and close the file: press the **Esc** key, type `:wq`, and press the **Enter** key.
### Get Ready for Running the Demo Applications on Windows*
Before running compiled binary files, make sure your application can find the OpenVINO™ and OpenCV libraries.
Optionally, download the OpenCV community FFmpeg plugin using the downloader script in the OpenVINO package: `<INSTALL_DIR>\extras\opencv\ffmpeg-download.ps1`.
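
One way to launch that script from a regular command prompt is a plain PowerShell invocation (a generic sketch, not a command taken from the OpenVINO documentation):

```bat
rem Run the FFmpeg plugin downloader; adjust <INSTALL_DIR> to your installation.
powershell -ExecutionPolicy Bypass -File "<INSTALL_DIR>\extras\opencv\ffmpeg-download.ps1"
```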
If you use the [Intel® Distribution of OpenVINO™ toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html) to build demos,
run the `setupvars` script to set all necessary environment variables:
```bat
<INSTALL_DIR>\setupvars.bat
```
If you use your own OpenVINO™ and OpenCV binaries to build the demos, please make sure you have added them
to the `PATH` environment variable.
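
For example (directory names are placeholders):

```bat
rem Prepend your OpenVINO™ and OpenCV binary directories to PATH for the current session.
set PATH=<openvino_bin_dir>;<opencv_bin_dir>;%PATH%
```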
To run Python demo applications that require native Python extension modules, you must additionally set the `PYTHONPATH` environment variable to include the build output folder with the extension modules, for example: `set PYTHONPATH=%PYTHONPATH%;<bin_dir>`.
To debug or run the demos on Windows in Microsoft Visual Studio, make sure you
have properly configured **Debugging** environment settings for the **Debug**
and **Release** configurations. Set correct paths to the OpenCV libraries, and
debug and release versions of the OpenVINO™ libraries.
For example, for the **Debug** configuration, go to the project's **Configuration Properties**, open the **Debugging** category, and set the `PATH` variable in the **Environment** field so that it includes the directories with the OpenCV libraries and the debug versions of the OpenVINO™ libraries.
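
A hypothetical value for the **Environment** field (the directories are placeholders and depend on your installation and build layout):

```bat
PATH=<openvino_debug_bin_dir>;<opencv_bin_dir>;%PATH%
```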
In `demos/action_recognition_demo/python/README.md`:

For demo input image or video files, refer to the section **Media Files Available for Demos** in the [Open Model Zoo Demos Overview](../../README.md).
The list of models supported by the demo is in the `<omz_dir>/demos/action_recognition_demo/python/models.lst` file.
This file can be used as a parameter for [Model Downloader](../../../tools/model_tools/README.md) and Converter to download and, if necessary, convert models to OpenVINO IR format (\*.xml + \*.bin).
In `demos/background_subtraction_demo/cpp_gapi/README.md`:

For demo input image or video files, refer to the section **Media Files Available for Demos** in the [Open Model Zoo Demos Overview](../../README.md).
The list of models supported by the demo is in the `<omz_dir>/demos/background_subtraction_demo/cpp_gapi/models.lst` file.
This file can be used as a parameter for [Model Downloader](../../../tools/model_tools/README.md) and Converter to download and, if necessary, convert models to OpenVINO IR format (\*.xml + \*.bin).