**`demos/3d_segmentation_demo/python/README.md`** (+2 −2)

On startup, the demo reads command-line parameters and loads a model and images to OpenVINO™ Runtime plugin.

```diff
-> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#general-conversion-parameters).
+> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Embedding Preprocessing Computation](@ref openvino_docs_MO_DG_Additional_Optimization_Use_Cases).
```
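The manual rearrangement the NOTE mentions amounts to reversing the last axis of each HxWx3 frame. A minimal numpy sketch (illustrative, not the demo's code; `--reverse_input_channels` bakes the same reorder into the converted model instead):

```python
import numpy as np

def bgr_to_rgb(frame):
    """Reverse the channel axis of an HxWx3 image (BGR <-> RGB)."""
    return frame[..., ::-1]

# Tiny 1x1 "image": pure blue in BGR order is (255, 0, 0).
bgr = np.array([[[255, 0, 0]]], dtype=np.uint8)
rgb = bgr_to_rgb(bgr)  # blue moves to the last channel: (0, 0, 255)
```

Applying the function twice restores the original order, so the same helper converts in both directions.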
**`demos/README.md`** (+15 −13)

## Build the Demo Applications

To build the demos, you need to source OpenVINO™ and OpenCV environment. You can install the OpenVINO™ toolkit using the installation package for [Intel® Distribution of OpenVINO™ toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit-download.html) or build the open-source version available in the [OpenVINO GitHub repository](https://github.com/openvinotoolkit/openvino) using the [build instructions](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).

```diff
-For the Intel® Distribution of OpenVINO™ toolkit installed to the `<INSTALL_DIR>` directory on your machine, run the following commands to download OpenCV and set environment variables before building the demos:
+For the Intel® Distribution of OpenVINO™ toolkit installed to the `<INSTALL_DIR>` directory on your machine, run the following commands to download prebuilt OpenCV and set environment variables before building the demos:
```

For the open-source version of OpenVINO, set the following variables:

```diff
-* `InferenceEngine_DIR` pointing to a folder containing `InferenceEngineConfig.cmake`
 * `OpenVINO_DIR` pointing to a folder containing `OpenVINOConfig.cmake`
-* `ngraph_DIR` pointing to a folder containing `ngraphConfig.cmake`.
 * `OpenCV_DIR` pointing to OpenCV. The same OpenCV version should be used both for OpenVINO and demos build.
```

Alternatively, these values can be provided via command line while running `cmake`. See [CMake search procedure](https://cmake.org/cmake/help/latest/command/find_package.html#search-procedure). Also add paths to the built OpenVINO™ Runtime libraries to the `LD_LIBRARY_PATH` variable before building the demos.

The officially supported Linux* build environment is the following: […] To build the demo applications for Linux, go to the directory with the `build_demos.sh` script and run it.

The recommended Windows* build environment is the following:

```diff
 - Microsoft Windows* 10
-- Microsoft Visual Studio* 2017, or 2019
-- CMake* version 3.10 or higher
-
->**NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
+- Microsoft Visual Studio* 2019
+- CMake* version 3.14 or higher
```

To build the demo applications for Windows, go to the directory with the `build_demos_msvc.bat` batch file and run it:

```bat
build_demos_msvc.bat
```

By default, the script automatically detects the highest Microsoft Visual Studio version installed on the machine and uses it to create and build a solution for a demo code. Optionally, you can also specify the preferred Microsoft Visual Studio version to be used by the script. Supported

````diff
-versions are: `VS2017`, `VS2019`. For example, to build the demos using the Microsoft Visual Studio 2017, use the following command:
+version is: `VS2019`. For example, to build the demos using the Microsoft Visual Studio 2019, use the following command:
+
+```bat
+build_demos_msvc.bat VS2019
+```
+
+By default, the demo applications binaries are built into the `C:\Users\<username>\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release` directory.
+The default build folder can be changed with the `-b` option. For example, the following command will build Open Model Zoo demos into the `c:\temp\omz-demos-build` folder:

 ```bat
-build_demos_msvc.bat VS2017
+build_demos_msvc.bat -b c:\temp\omz-demos-build
 ```

-The demo applications binaries are in the `C:\Users\<username>\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release` directory.

 You can also build a generated solution by yourself, for example, if you want to build binaries in Debug configuration. Run the appropriate version of the […]
````

For example, for the **Debug** configuration, go to the project's […] variable in the **Environment** field to the following: […]
**`demos/action_recognition_demo/python/README.md`** (+2 −2)

[…] that uses Asynchronous Inference Request API by scheduling infer requests in cycle. You can change the value of `num_requests` in `action_recognition_demo.py` to find an optimal number of parallel working infer requests for your inference accelerators (Intel(R) Neural Compute Stick devices and GPUs benefit from a higher number of infer requests).

The BGR channels order **NOTE** received the same link update as the other demo READMEs in this change: its **When to Reverse Input Channels** reference now points to [Embedding Preprocessing Computation](@ref openvino_docs_MO_DG_Additional_Optimization_Use_Cases) instead of [Converting a Model Using General Conversion Parameters](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#general-conversion-parameters).

## Preparing to Run

[…] The application uses OpenCV to display the real-time action recognition results […]
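The benefit of a higher `num_requests` comes from keeping several requests in flight at once so the accelerator never sits idle. A rough sketch of that scheduling idea, with a thread pool standing in for OpenVINO infer requests (`fake_infer` and all names here are illustrative, not the demo's API):

```python
from concurrent.futures import ThreadPoolExecutor
import time

NUM_REQUESTS = 4  # analogous to num_requests in action_recognition_demo.py

def fake_infer(frame_id):
    """Stand-in for one asynchronous inference on a frame."""
    time.sleep(0.01)  # simulated device latency
    return frame_id * 2

# Keep up to NUM_REQUESTS "requests" running concurrently; collect results
# in submission order so downstream frames stay correctly sequenced.
with ThreadPoolExecutor(max_workers=NUM_REQUESTS) as pool:
    futures = [pool.submit(fake_infer, i) for i in range(8)]
    results = [f.result() for f in futures]
```

With one worker the eight calls run serially; with four, total wall time drops roughly fourfold, which is the effect tuning `num_requests` aims for on real devices.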
**`demos/background_subtraction_demo/cpp_gapi/README.md`** (+2 −2)

The demo workflow is the following:

* If you specify `--target_bgr`, the background is replaced by a chosen image or video. By default, the background is replaced by a green field.
* If you specify `--blur_bgr`, the background is blurred according to a set value. By default, the value is zero and blurring is not applied.

The BGR channels order **NOTE** received the same link update as the other demo READMEs in this change: its **When to Reverse Input Channels** reference now points to [Embedding Preprocessing Computation](@ref openvino_docs_MO_DG_Additional_Optimization_Use_Cases) instead of [Converting a Model Using General Conversion Parameters](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#general-conversion-parameters).
**`demos/background_subtraction_demo/python/README.md`** (+16 −5)

The demo application expects an instance segmentation or background matting model. The list of supported model types gained a new entry:

```diff
 * At least two outputs including:
   * `fgr` with normalized in [0, 1] range foreground
   * `pha` with normalized in [0, 1] range alpha
-4. for video background matting models based on RNN architecture:
+4. for image background matting models without trimap (background segmentation):
+   * Single input for input image.
+   * Single output with normalized in [0, 1] range alpha
+5. for video background matting models based on RNN architecture:
   * Five inputs:
     * `src` for input image
     * recurrent inputs: `r1`, `r2`, `r3`, `r4`
```

The demo workflow is the following:

* If you specify `--blur_bgr`, the background is blurred according to a set value. By default, the value is zero and blurring is not applied.
* If you specify `--show_with_original_frame`, the result image is merged with the input one.

The BGR channels order **NOTE** received the same link update as the other demo READMEs in this change: its **When to Reverse Input Channels** reference now points to [Embedding Preprocessing Computation](@ref openvino_docs_MO_DG_Additional_Optimization_Use_Cases) instead of [Converting a Model Using General Conversion Parameters](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#general-conversion-parameters).

A new section was added:

```diff
+## Model API
+
+The demo utilizes model wrappers, adapters and pipelines from [Python* Model API](../../common/python/openvino/model_zoo/model_api/README.md).
+
+The generalized interface of wrappers with its unified results representation provides the support of multiple different background subtraction model topologies in one demo.
```

> **NOTE**: Refer to the tables [Intel's Pre-Trained Models Device Support](../../../models/intel/device_support.md) and [Public Pre-Trained Models Device Support](../../../models/public/device_support.md) for the details on models inference support at different devices.

[…] You can use these metrics to measure application-level performance.
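The `fgr`/`pha` outputs listed above are typically combined by alpha-blending the predicted foreground over the replacement background. A hedged numpy sketch of that compositing step (not the demo's actual code; array names are illustrative):

```python
import numpy as np

def composite(fgr, pha, new_bgr):
    """Blend predicted foreground over a replacement background.

    fgr, new_bgr: HxWx3 float images in [0, 1]; pha: HxWx1 alpha matte in [0, 1].
    """
    return fgr * pha + new_bgr * (1.0 - pha)

fg = np.full((2, 2, 3), 0.8)     # stand-in foreground prediction
alpha = np.full((2, 2, 1), 0.5)  # stand-in alpha matte, broadcast over channels
bg = np.zeros((2, 2, 3))         # replacement background (black)
out = composite(fg, alpha, bg)   # each pixel -> 0.8 * 0.5 + 0.0 * 0.5 = 0.4
```

Because `pha` is in [0, 1], the result stays in range without clipping; a blurred copy of the original frame can be passed as `new_bgr` to reproduce the `--blur_bgr` effect.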