Conversation

@aobolensk (Member)

No description provided.

\begin{frame}{What is OpenVINO?}
\begin{columns}[T,totalwidth=\textwidth]
\begin{column}{0.7\textwidth}
OpenVINO (Open Visual Inference and Neural Network Optimization)
This is a bit inaccurate as written; please also cover the other supported architectures, not just Intel.
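If helpful, a sketch of how the intro line could be broadened to mention non-Intel architectures; the x86-64/ARM/RISC-V phrasing below is a suggestion drawn from the later review comments, not from the deck itself:

```latex
% Suggested wording (assumption: the intro line should state multi-arch support)
OpenVINO (Open Visual Inference and Neural Network Optimization) is an
open-source toolkit for optimizing and deploying AI inference, with device
plugins covering x86-64, ARM, and RISC-V CPUs as well as GPUs and NPUs.
```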

\begin{itemize}
\item \textbf{Purpose:} Optimize and deploy AI inference across Intel CPUs, GPUs, NPUs, and other accelerators
\item \textbf{Core components:} Model Optimizer, Runtime (Inference Engine), Post-Training Optimization Tool, Benchmark tools, Notebooks
\item \textbf{Model formats (Frontends):} IR (\texttt{.xml/.bin}), ONNX (\texttt{.onnx}), TensorFlow (SavedModel/MetaGraph/frozen \texttt{.pb/.pbtxt}), TensorFlow Lite (\texttt{.tflite}), PaddlePaddle (\texttt{.pdmodel}), PyTorch (TorchScript/FX)
Please also list the file extension for PyTorch.
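A possible revision of that item; the .pt/.pth extensions are the common PyTorch convention and are my addition, not the deck's:

```latex
% Suggested revision; .pt/.pth added as the usual PyTorch file extensions
\item \textbf{Model formats (Frontends):} IR (\texttt{.xml/.bin}),
  ONNX (\texttt{.onnx}),
  TensorFlow (SavedModel/MetaGraph/frozen \texttt{.pb/.pbtxt}),
  TensorFlow Lite (\texttt{.tflite}), PaddlePaddle (\texttt{.pdmodel}),
  PyTorch (\texttt{.pt/.pth} via TorchScript/FX)
```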


\begin{frame}{OpenVINO at a Glance}
\begin{itemize}
\item \textbf{Purpose:} Optimize and deploy AI inference across Intel CPUs, GPUs, NPUs, and other accelerators
@allnes Nov 3, 2025

I would add ARM and RISC-V as additional architectures, and every item should cover those architectures too.
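One way to make the bullet architecture-aware, as requested; the exact device list is an assumption:

```latex
% Sketch: fold the architectures into the item itself (device list assumed)
\item \textbf{Purpose:} Optimize and deploy AI inference on CPUs
  (Intel/AMD x86-64, ARM, RISC-V), Intel GPUs, NPUs, and other
  accelerators via device plugins
```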

\begin{frame}{OpenVINO at a Glance}
\begin{itemize}
\item \textbf{Purpose:} Optimize and deploy AI inference across Intel CPUs, GPUs, NPUs, and other accelerators
\item \textbf{Core components:} Model Optimizer, Runtime (Inference Engine), Post-Training Optimization Tool, Benchmark tools, Notebooks
Model Optimizer is legacy; it has been replaced by OVC (OpenVINO Model Converter, https://docs.openvino.ai/2024/notebooks/convert-to-openvino-with-output.html).
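A possible updated wording for the components item; `ovc` and `openvino.convert_model` are the conversion entry points per the linked docs, while naming NNCF as the POT successor is my assumption:

```latex
% Suggested revision: OVC replaces Model Optimizer; NNCF replaces POT (assumed)
\item \textbf{Core components:} OpenVINO Model Converter
  (\texttt{ovc} CLI / \texttt{openvino.convert\_model}),
  Runtime (Inference Engine), NNCF for post-training quantization,
  \texttt{benchmark\_app}, Notebooks
```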

\item \textbf{Purpose:} Optimize and deploy AI inference across Intel CPUs, GPUs, NPUs, and other accelerators
\item \textbf{Core components:} Model Optimizer, Runtime (Inference Engine), Post-Training Optimization Tool, Benchmark tools, Notebooks
\item \textbf{Model formats (Frontends):} IR (\texttt{.xml/.bin}), ONNX (\texttt{.onnx}), TensorFlow (SavedModel/MetaGraph/frozen \texttt{.pb/.pbtxt}), TensorFlow Lite (\texttt{.tflite}), PaddlePaddle (\texttt{.pdmodel}), PyTorch (TorchScript/FX)
\item \textbf{Targets:} CPU, iGPU, dGPU (e.g., Intel Arc), NPU, and more via plugins
iGPU and dGPU are united under a single GPU plugin; you could describe either the plugins or the devices (ARM CPU, Intel CPU, RISC-V CPU).
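A sketch of the targets item rewritten in terms of plugins, as the comment suggests; the plugin/device grouping below is my reading of it:

```latex
% Suggested revision: describe plugins, with iGPU/dGPU folded into one GPU plugin
\item \textbf{Targets:} CPU plugin (Intel, ARM, RISC-V CPUs),
  GPU plugin (integrated and discrete Intel GPUs, e.g., Intel Arc),
  NPU plugin; further devices via additional plugins
```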


\begin{frame}{Device Plugins Architecture}
\centering
\ovbox{gray!15}{\textbf{Application} (C++/Python)}\\[0.6em]
I don't see the frontends in this diagram.
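If a frontends layer is wanted in the diagram, one possible extra row reusing the deck's \ovbox macro; the color choice and the placement directly below Application are assumptions:

```latex
% Suggested additional row (color and position are assumptions)
\ovbox{gray!15}{\textbf{Application} (C++/Python)}\\[0.6em]
\ovbox{gray!15}{\textbf{Frontends} (IR, ONNX, TF, TFLite, Paddle, PyTorch)}\\[0.6em]
```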

@allnes (Member) commented Nov 3, 2025

I would like to see:

  • how to install
  • how to build
  • an API example (C++ and Python), e.g., for YOLOv12
  • sample applications (especially benchmark_app)
  • GenAI as a high-level exercise
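To seed the install and API items, a minimal sketch of two slides; the pip package name `openvino` and the Python calls below exist in current OpenVINO releases, the `benchmark_app` command shipping with the pip package is an assumption worth verifying, and the model path is a placeholder:

```latex
% Sketch for an install + first-run slide (model path is a placeholder)
\begin{frame}[fragile]{Install and First Run}
\begin{verbatim}
pip install openvino              # runtime + Python API
benchmark_app -m model.xml -d CPU # throughput/latency report
\end{verbatim}
\end{frame}

% Sketch for a minimal Python API slide
\begin{frame}[fragile]{Minimal Python API}
\begin{verbatim}
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")        # placeholder model
compiled = core.compile_model(model, "CPU")
results = compiled(inputs)                  # CompiledModel is callable
\end{verbatim}
\end{frame}
```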


2 participants