[id="nw-about-dpu_{context}"]
= Orchestrating DPUs with the DPU Operator

A Data Processing Unit (DPU) is a type of programmable processor that is considered one of the three fundamental pillars of computing, alongside CPUs and GPUs. While CPUs handle general computing tasks and GPUs accelerate specific workloads, the primary role of the DPU is to offload and accelerate data-centric workloads, such as networking, storage, and security functions.

DPUs are typically used in data centers and cloud environments to improve performance, reduce latency, and enhance security by offloading these tasks from the CPU. DPUs can also be used to create a more efficient and flexible infrastructure by enabling the deployment of specialized workloads closer to the data source.

The DPU Operator manages DPU devices and network attachments. It deploys the DPU daemon onto {product-title} compute nodes, and this daemon interfaces through an API to control the DPU daemon running on the DPU itself. The DPU Operator is also responsible for the life-cycle management of the `ovn-kube` components and the necessary host network initialization on the DPU.
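
After the DPU Operator is installed, it is typically activated by creating its configuration custom resource in the Operator namespace. The following example is an illustrative sketch only: the `DpuOperatorConfig` kind, the `config.openshift.io/v1` API version, the `openshift-dpu-operator` namespace, and the `logLevel` field are assumptions that can differ between DPU Operator releases, so verify them against the custom resource definitions installed with your version before applying the manifest.

[source,yaml]
----
apiVersion: config.openshift.io/v1   # assumed API group and version for the DPU Operator configuration CRD
kind: DpuOperatorConfig              # assumed kind; verify against the CRDs that the Operator installs
metadata:
  name: dpu-operator-config
  namespace: openshift-dpu-operator  # assumed Operator namespace
spec:
  logLevel: 0                        # assumed field controlling DPU daemon log verbosity
----

After the resource is created, the Operator rolls the DPU daemon out to the selected compute nodes; a command such as `oc get pods -n openshift-dpu-operator` (using the namespace assumed above) shows whether the daemon pods are running.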

The currently supported DPU devices are described in the following table.

.Supported devices
[cols="1,1,1,2", options="header"]
|===
| Vendor | Device | Firmware | Description

| Intel | IPU E2100 | Version 2.0.0.11126 or later | A DPU designed to offload networking, storage, and security tasks from host CPUs in data centers, improving efficiency and performance. For instructions on deploying a full end-to-end solution, see the Red{nbsp}Hat Knowledgebase solution link:https://access.redhat.com/articles/7120276[Accelerating Confidential AI on OpenShift with the Intel E2100 IPU, DPU Operator, and F5 NGINX].
| Senao | SX904 | 35.23.47.0008 or later | A SmartNIC designed to offload compute and network services from the host CPUs in data centers and edge computing environments, improving efficiency and isolation of workloads.
| Marvell | Marvell Octeon 10 CN106 | SDK12.25.01 or later | A DPU designed to offload workloads that require high-speed data processing from host CPUs in data centers and edge computing environments, improving performance and energy efficiency.
|===

[NOTE]
====
The NVIDIA BlueField-3 is not supported.
====