demos/3d_segmentation_demo/python/README.md (4 additions, 17 deletions)
@@ -4,18 +4,12 @@ This topic demonstrates how to run the 3D Segmentation Demo, which segments 3D i
## How It Works

- On startup, the demo reads command-line parameters and loads a network and images to the Inference Engine plugin.
+ On startup, the demo reads command-line parameters and loads a model and images to OpenVINO™ Runtime plugin.

> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#general-conversion-parameters).

## Preparing to Run

- The demo dependencies should be installed before run. That can be achieved with the following command:
For demo input image or video files, refer to the section **Media Files Available for Demos** in the [Open Model Zoo Demos Overview](../../README.md).
The list of models supported by the demo is in `<omz_dir>/demos/3d_segmentation_demo/python/models.lst` file.
This file can be used as a parameter for [Model Downloader](../../../tools/model_tools/README.md) and Converter to download and, if necessary, convert models to OpenVINO IR format (\*.xml + \*.bin).
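As a concrete illustration of the load step described in the hunk above, here is a minimal sketch, not the demo's actual code, of reading an IR model and compiling it with OpenVINO™ Runtime. The model file name and target device are illustrative assumptions, not values taken from this diff.

```python
# Minimal sketch of loading an IR model with OpenVINO Runtime (assumed openvino.runtime API).
# The model file name and device below are illustrative, not the demo's defaults.
from openvino.runtime import Core

core = Core()
model = core.read_model("brain-tumor-segmentation-0002.xml")  # IR obtained via Model Downloader/Converter
compiled_model = core.compile_model(model, "CPU")             # target device, e.g. CPU or GPU
print("Model inputs:", [inp.any_name for inp in compiled_model.inputs])
```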
@@ -45,11 +39,10 @@ Run the application with the `-h` or `--help` option to see the usage message:
The Open Model Zoo demo applications are console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models that run inference simultaneously, such as detecting a person in a video stream along with detecting the person's physical attributes, such as age, gender, and emotional state.
@@ -137,7 +151,7 @@ The Open Model Zoo includes the following demos:
-[Text Detection C++ Demo](./text_detection_demo/cpp/README.md) - Text Detection demo. It detects and recognizes multi-oriented scene text on an input image and puts a bounding box around detected area.
-[Text Spotting Python\* Demo](./text_spotting_demo/python/README.md) - The demo demonstrates how to run Text Spotting models.
-[Text-to-speech Python\* Demo](./text_to_speech_demo/python/README.md) - Shows an example of using Forward Tacotron and WaveRNN neural networks for text to speech task.
- -[Time Series Forecasting Python\* Demo](./time_series_forecasting_demo/python/README.md) - The demo shows how to use the OpenVINO™ toolkit to time series forecastig.
+ -[Time Series Forecasting Python\* Demo](./time_series_forecasting_demo/python/README.md) - The demo shows how to use the OpenVINO™ toolkit to time series forecasting.
-[Whiteboard Inpainting Python\* Demo](./whiteboard_inpainting_demo/python/README.md) - The demo shows how to use the OpenVINO™ toolkit to detect and hide a person on a video so that all text on a whiteboard is visible.
## Media Files Available for Demos
@@ -272,37 +286,17 @@ cmake -A x64 <open_model_zoo>/demos
cmake --build . --config Debug
```

- ### <a name="model_api_installation"></a>Python\* model API installation
+ ### <a name="python_requirements"></a>Dependencies for Python* Demos

- Python Model API with model wrappers and pipelines can be installed as a part of OpenVINO™ toolkit or from source.
- Installation from source is as follows:
-
- 1. Install Python (version 3.6 or higher), [setuptools](https://pypi.org/project/setuptools/):
-
- 2. Build the wheel with the following command:
+ The dependencies for Python demos must be installed before running. It can be achieved with the following command:
Alternatively, instead of building the wheel you can use the following command inside `<omz_dir>/demos/common/python/` directory to build and install the package:
- ```sh
- python -m pip install .
- ```
-
- When the model API package is installed, you can import it as follows:
### <a name="python_model_api"></a>Python\* model API package

- >**NOTE**: On Linux and macOS, you may need to type `python3` instead of `python`. You may also need to [install pip](https://pip.pypa.io/en/stable/installation/).
- > For example, on Ubuntu execute the following command to get pip installed: `sudo apt install python3-pip`.
+ To run Python demo applications, you need to install the Python* Model API package. Refer to [Python* Model API documentation](common/python/openvino/model_zoo/model_api/README.md#installing-python*-model-api-package) to learn about its installation.

### <a name="build_python_extensions"></a>Build the Native Python\* Extension Modules
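The removed lines above referred to an import example that is not shown in this hunk. As orientation only, here is a minimal sketch of importing the installed package; the package path is an assumption inferred from the `common/python/openvino/model_zoo/model_api` location referenced in the added link, not something stated in this diff.

```python
# Minimal sketch, assuming the Model API package installs under the
# openvino.model_zoo.model_api namespace (path inferred from the link above).
from openvino.model_zoo import model_api

print(model_api.__file__)  # shows which installed copy of the package is picked up
```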
@@ -383,7 +377,7 @@ set up the `PYTHONPATH` environment variable as follows, where `<bin_dir>` is th
the built demo applications:

```sh
- export PYTHONPATH="$PYTHONPATH:<bin_dir>/lib"
+ export PYTHONPATH="<bin_dir>:$PYTHONPATH"
```

You are ready to run the demo applications. To learn about how to run a particular
@@ -409,7 +403,7 @@ set up the `PYTHONPATH` environment variable as follows, where `<bin_dir>` is th
the built demo applications:

```bat
- set PYTHONPATH=%PYTHONPATH%;<bin_dir>
+ set PYTHONPATH=<bin_dir>;%PYTHONPATH%
```

To debug or run the demos on Windows in Microsoft Visual Studio, make sure you
demos/action_recognition_demo/python/README.md (4 additions, 4 deletions)
@@ -17,17 +17,17 @@ Every step implements `PipelineStep` interface by creating a class derived from
*`DataStep` reads frames from the input video.
* Model step depends on architecture type:
- - For encder-decoder models there are two steps:
+ - For encoder-decoder models there are two steps:
-`EncoderStep` preprocesses a frame and feeds it to the encoder model to produce a frame embedding. Simple averaging of encoder's outputs over a time window is applied.
-`DecoderStep` feeds embeddings produced by the `EncoderStep` to the decoder model and produces predictions. For models that use `DummyDecoder`, simple averaging of encoder's outputs over a time window is applied.
- For the specific implemented single models, the corresponding `<ModelNameStep>` does preprocessing and produces predictions.
*`RenderStep` renders prediction results.

- Pipeline steps are composed in `AsyncPipeline`. Every step can be run in separate thread by adding it to the pipeline with `parallel=True` option.
+ Pipeline steps are composed in `AsyncPipeline`. Every step can be run in a separate thread by adding it to the pipeline with `parallel=True` option.
When two consequent steps occur in separate threads, they communicate via message queue (for example, deliver step result or stop signal).

- To ensure maximum performance, Inference Engine models are wrapped in `AsyncWrapper`
- that uses Inference Engine async API by scheduling infer requests in cyclical order
+ To ensure maximum performance, models are wrapped in `AsyncWrapper`
+ that uses Asynchronous Inference Request API by scheduling infer requests in cyclical order
(inference on every new input is started asynchronously, result of the longest working infer request is returned).
You can change the value of `num_requests` in `action_recognition_demo.py` to find an optimal number of parallel working infer requests for your inference accelerators
(Intel(R) Neural Compute Stick devices and GPUs benefit from higher number of infer requests).
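To make the cyclic scheduling idea in the added lines above concrete, here is a minimal sketch, not the demo's actual `AsyncWrapper`, of cycling a fixed pool of infer requests so that inference on a new input starts asynchronously while the longest-running request's result is collected. It assumes the `openvino.runtime` Python API; input shape and dtype handling are placeholders.

```python
# Illustrative sketch of cyclic async scheduling (assumed openvino.runtime API);
# this is not the demo's AsyncWrapper implementation.
from collections import deque

import numpy as np
from openvino.runtime import Core


class CyclicAsyncRequests:
    def __init__(self, model_path, device="CPU", num_requests=4):
        core = Core()
        compiled = core.compile_model(core.read_model(model_path), device)
        # Pool of (infer_request, has_work_in_flight) slots served in cyclic order.
        self.slots = deque((compiled.create_infer_request(), False) for _ in range(num_requests))

    def infer(self, frame):
        """Start inference on `frame`; return the oldest in-flight result (None while warming up)."""
        request, busy = self.slots.popleft()       # the longest-working request
        result = None
        if busy:
            request.wait()                         # block only for this one request
            result = request.get_output_tensor(0).data.copy()
        # Shape/dtype are placeholders; real preprocessing depends on the model.
        request.start_async({0: np.expand_dims(frame, 0).astype(np.float32)})
        self.slots.append((request, True))
        return result
```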