
Commit 8649470

AI README.md Updates (#2038)
1 parent 8e2d7ed commit 8649470

File tree: 23 files changed (+235, -567 lines)
Lines changed: 13 additions & 29 deletions

@@ -1,48 +1,32 @@
-# End-to-End Samples for the Intel® AI Analytics Toolkit (AI Kit)
+# End-to-End Samples for the Intel AI Tools
 
-The Intel® AI Analytics Toolkit (AI Kit) allows data scientists, AI
+The Intel AI Tools give data scientists, AI
 developers, and researchers familiar Python* tools and frameworks to
 accelerate end-to-end data science and analytics pipelines on Intel®
 architectures. The components are built using oneAPI libraries for low-level
 compute optimizations.
 
-The AI Toolkit maximizes performance from preprocessing
-through machine learning, and provides interoperability for efficient model
+The Intel AI Tools maximize performance from preprocessing
+through machine learning, and provide interoperability for efficient model
 development.
 
 You can find more information at
-[Intel® AI Analytics Toolkit (AI Kit)](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
+[Intel AI Tools](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
 
 
 # End-to-end Samples
 
-| Components | Folder | Description
-| :--- |:--- |:---
-| Intel® Distribution of Modin* <br> Intel® oneAPI Data Analytics Library (oneDAL) <br> IDP | [Census](Census) | Use Intel® Distribution of Modin* to ingest and process U.S. census data from 1970 to 2010 in order to build a ridge regression based model to find the relation between education and the total income earned in the US.
-| Intel Extension for PyTorch (IPEX), Intel Neural Compressor (INC) | [LanguageIdentification](LanguageIdentification) | Trains a model to perform language identification using the Hugging Face Speechbrain library and CommonVoice dataset, and optimized with IPEX and INC.
-| Intel® Distribution of OpenVINO™ toolkit | [LidarObjectDetection-PointPillars](LidarObjectDetection-PointPillars) | Performs 3D object detection and classification using point cloud data from a LIDAR sensor as input.
-
-# Using Samples in Intel® DevCloud
-To get started using samples in the Intel® DevCloud, refer to [*Using AI samples in Intel oneAPI DevCloud*](https://github.com/intel-ai-tce/oneAPI-samples/tree/devcloud/AI-and-Analytics#using-samples-in-intel-oneapi-devcloud).
-
-
-### Use Visual Studio Code* (VS Code) (Optional)
-You can use Visual Studio Code* (VS Code) extensions to set your environment,
-create launch configurations, and browse and download samples.
-
-The basic steps to build and run a sample using VS Code include:
-1. Configure the oneAPI environment with the extension **Environment Configurator for Intel® oneAPI Toolkits**.
-2. Download a sample using the extension **Code Sample Browser for Intel® oneAPI Toolkits**.
-3. Open a terminal in VS Code (**Terminal > New Terminal**).
-4. Run the sample in the VS Code terminal as you would on a Linux* system.
-5. (Linux only) Debug your GPU application with GDB for Intel® oneAPI toolkits using the Generate Launch Configurations extension.
-
-To learn more about the extensions and how to configure the oneAPI environment, see the
-[Using Visual Studio Code with Intel® oneAPI Toolkits User Guide](https://www.intel.com/content/www/us/en/develop/documentation/using-vs-code-with-intel-oneapi/top.html).
+|AI Tools preset bundle | Components | Folder | Description
+| :--- | :--- |:--- |:---
+|Classical Machine Learning| Intel® Distribution of Modin* <br> Intel® oneAPI Data Analytics Library (oneDAL) <br> IDP | [Census](Census) | Use Intel® Distribution of Modin* to ingest and process U.S. census data from 1970 to 2010 in order to build a ridge regression based model to find the relation between education and the total income earned in the US.
+|Deep Learning| Intel® Extension for PyTorch, Intel® Neural Compressor | [LanguageIdentification](LanguageIdentification) | Trains a model to perform language identification using the Hugging Face Speechbrain library and CommonVoice dataset, and optimized with IPEX and INC.
+|Inference Optimization| Intel® Distribution of OpenVINO™ toolkit | [LidarObjectDetection-PointPillars](LidarObjectDetection-PointPillars) | Performs 3D object detection and classification using point cloud data from a LIDAR sensor as input.
 
 
 ## License
 
 Code samples are licensed under the MIT license. See [License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.
 
-Third-party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
+Third-party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
+
+*Other names and brands may be claimed as the property of others. [Trademarks](https://www.intel.com/content/www/us/en/legal/trademarks.html)

AI-and-Analytics/Features-and-Functionality/INC_QuantizationAwareTraining_TextClassification/INC_QuantizationAwareTraining_TextClassification.ipynb

Lines changed: 1 addition & 1 deletion

@@ -146,7 +146,7 @@
    "outputs": [],
    "source": [
     "def tokenize_data(example):\n",
-    "    return tokenizer(example['text'], padding=True, max_length=128)\n",
+    "    return tokenizer(example['text'], padding='max_length', max_length=128)\n",
     "\n",
     "dataset = dataset.map(tokenize_data, batched=True)\n",
     "dataset"

AI-and-Analytics/Features-and-Functionality/INC_QuantizationAwareTraining_TextClassification/INC_QuantizationAwareTraining_TextClassification.py

Lines changed: 1 addition & 1 deletion

@@ -104,7 +104,7 @@
 
 
 def tokenize_data(example):
-    return tokenizer(example['text'], padding=True, max_length=128)
+    return tokenizer(example['text'], padding='max_length', max_length=128)
 
 dataset = dataset.map(tokenize_data, batched=True)
 dataset
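This one-line change (made identically in the notebook and the script above) is easy to miss but meaningful: with Hugging Face tokenizers, `padding=True` only pads each batch to its longest sequence, so tensor shapes vary from batch to batch, whereas `padding='max_length'` pads every example to the full 128 tokens and yields the fixed shapes a quantization-aware training flow generally prefers. A quick sketch of the difference (the checkpoint name is an arbitrary stand-in, not taken from the sample):

```python
# Compare the two padding strategies; the checkpoint is illustrative only.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = ["short text", "a somewhat longer piece of text for comparison"]

dynamic = tokenizer(batch, padding=True, max_length=128)         # pad to longest in batch
static = tokenizer(batch, padding="max_length", max_length=128)  # pad to exactly 128

print(len(dynamic["input_ids"][0]))  # depends on the batch contents
print(len(static["input_ids"][0]))   # always 128
```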

AI-and-Analytics/Features-and-Functionality/INC_QuantizationAwareTraining_TextClassification/requirements.txt

Lines changed: 1 addition & 1 deletion

@@ -3,4 +3,4 @@ evaluate
 accelerate
 datasets
 neural-compressor
-git+https://github.com/huggingface/optimum.git#egg=optimum[neural-compressor]
+optimum[neural-compressor]
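Dropping the `git+` URL in favor of the released `optimum[neural-compressor]` package pins the sample to PyPI builds instead of whatever happens to sit at the repository HEAD, which keeps installs reproducible. A small post-install sanity check (a sketch added here, not part of the commit):

```python
# Verify that both distributions resolved after:
#   pip install "optimum[neural-compressor]"
from importlib.metadata import version

print("optimum:", version("optimum"))
print("neural-compressor:", version("neural-compressor"))
```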

AI-and-Analytics/Features-and-Functionality/IntelPytorch_Interactive_Chat_Quantization/sample.json

Lines changed: 2 additions & 2 deletions

@@ -2,7 +2,7 @@
   "guid": "7A85A71C-9D14-4950-8B10-FD7B16CEEB66",
   "name": "Interactive chat based on DialoGPT model using Intel® Extension for PyTorch* Quantization",
   "categories": ["Toolkit/oneAPI AI And Analytics/AI Getting Started Samples"],
-  "description": "This sample demonstrates how to create interactive chat based on pre-treained DialoGPT model and add the Intel® Extension for PyTorch* quantization to it.",
+  "description": "This sample demonstrates how to create interactive chat based on pre-trained DialoGPT model and add the Intel® Extension for PyTorch* quantization to it.",
   "builder": ["cli"],
   "toolchain": ["jupyter"],
   "languages": [{"python":{}}],
@@ -26,4 +26,4 @@
   "expertise": "Getting Started"
 }
 
-
+
(The final hunk is a whitespace-only change to the trailing blank line.)
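For context, the interactive chat the description refers to builds on the standard DialoGPT generation loop in transformers. A minimal single-turn sketch of that pattern follows; it is illustrative only, the model size is an assumption, and the Intel® Extension for PyTorch* quantization step the sample adds is omitted:

```python
# One chat turn with a pre-trained DialoGPT model; sketch, not the sample's code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# DialoGPT expects each turn to end with the EOS token.
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
with torch.no_grad():
    reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens, i.e. the model's reply.
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```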
AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8/README.md

Lines changed: 19 additions & 58 deletions

@@ -1,6 +1,6 @@
-# `Intel® TensorFlow* Model Zoo Inference With FP32 Int8` Sample
+# `Intel® AI Reference models for TensorFlow* Inference With FP32 Int8` Sample
 
-The `Intel® TensorFlow* Model Zoo Inference With FP32 Int8` sample demonstrates how to run ResNet50 inference on pretrained FP32 and Int8 models included in the Model Zoo for Intel® Architecture.
+The `Intel® AI Reference models for TensorFlow* Inference` sample demonstrates how to run ResNet50 inference on pretrained FP32 and Int8 models included in the Reference models for Intel® Architecture.
 
 | Area | Description
 |:--- |:---
@@ -24,33 +24,24 @@ The sample intends to help you understand some key concepts:
 |:--- |:---
 | OS | Ubuntu* 20.04 or higher
 | Hardware | Intel® Core™ Gen10 Processor <br> Intel® Xeon® Scalable Performance processors
-| Software | Intel® AI Analytics Toolkit (AI Kit)
+| Software | Intel® AI Reference models, Intel Extension for TensorFlow
 
 ### For Local Development Environments
 
-You will need to download and install the following toolkits, tools, and components to use the sample.
+Before running the sample, install the Intel Extension for TensorFlow* via the Intel AI Tools Selector or Offline Installer.
+You can refer to the Intel AI Tools [product page](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html) for software installation and the *[Get Started with the Intel® AI Tools for Linux*](https://software.intel.com/en-us/get-started-with-intel-oneapi-linux-get-started-with-the-intel-ai-analytics-toolkit)* for post-installation steps and scripts.
 
-- **Intel® AI Analytics Toolkit (AI Kit)**
 
-  You can get the AI Kit from [Intel® oneAPI Toolkits](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#analytics-kit). <br> See [*Get Started with the Intel® AI Analytics Toolkit for Linux**](https://www.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux) for AI Kit installation information and post-installation steps and scripts.
-
-  TensorFlow* or Pytorch* are ready for use once you finish installing and configuring the Intel® AI Analytics Toolkit (AI Kit).
-
-### For Intel® DevCloud
-
-The necessary tools and components are already installed in the environment. You do not need to install additional components. See [Intel® DevCloud for oneAPI](https://devcloud.intel.com/oneapi/get_started/) for information.
 
 ## Key Implementation Details
 
-The example uses some pretrained models published as part of the [Model Zoo for Intel® Architecture](https://github.com/IntelAI/models). The example also illustrates how to utilize TensorFlow* and Intel® Math Kernel Library (Intel® MKL) runtime settings to maximize CPU performance on ResNet50 workload.
+The example uses some pretrained models published as part of the [Intel® AI Reference models](https://github.com/IntelAI/models). The example also illustrates how to utilize TensorFlow* runtime settings to maximize CPU performance on ResNet50 workload.
 
-## Set Environment Variables
-
-When working with the command-line interface (CLI), you should configure the oneAPI toolkits using environment variables. Set up your CLI environment by sourcing the `setvars` script every time you open a new terminal window. This practice ensures that your compiler, libraries, and tools are ready for development.
 
 ## Run the `Intel® TensorFlow* Model Zoo Inference With FP32 Int8` Sample
 
-### On Linux*
+If you have already set up the PIP or Conda environment and installed AI Tools go directly to Run the Notebook.
+### Steps for Intel AI Tools Offline Installer
 
 > **Note**: If you have not already done so, set up your CLI
 > environment by sourcing the `setvars` script in the root of your oneAPI installation.
@@ -64,7 +55,7 @@ When working with the command-line interface (CLI), you should configure the one
 
 #### Activate Conda with Root Access
 
-By default, the AI Kit is installed in the `/opt/intel/oneapi` folder and requires root privileges to manage it. However, if you activated another environment, you can return with the following command.
+By default, the Intel AI Tools are installed in the `/opt/intel/oneapi` folder and requires root privileges to manage it. However, if you activated another environment, you can return with the following command.
 ```
 conda activate tensorflow
 ```
@@ -80,16 +71,16 @@ conda activate user_tensorflow
 
 #### Navigate to Model Zoo
 
-Navigate to the Model Zoo for Intel® Architecture source directory. By default, it is in your installation path, like `/opt/intel/oneapi/modelzoo`.
+Navigate to the Intel® AI Reference models source directory. By default, it is in your installation path, like `/opt/intel/oneapi/modelzoo`.
 
-1. View the available Model Zoo release versions for the AI Kit:
+1. View the available Intel® AI Reference models release versions for the AI Tools:
 ```
-ls /opt/intel/oneapi/modelzoo
-2.11.0 latest
+ls /opt/intel/oneapi/reference_models
+2.13.0 latest
 ```
-2. Navigate to the [Model Zoo Scripts](https://github.com/IntelAI/models/tree/v2.11.0/benchmarks) GitHub repo to determine the preferred released version to run inference for ResNet50 or another supported topology.
+2. Navigate to the [Intel® AI Reference models Scripts](https://github.com/IntelAI/models/tree/v2.11.0/benchmarks) GitHub repo to determine the preferred released version to run inference for ResNet50 or another supported topology.
 ```
-cd /opt/intel/oneapi/modelzoo/latest
+cd /opt/intel/oneapi/reference_models/latest
 ```
 
 #### Install Jupyter Notebook
@@ -124,43 +115,13 @@ conda install jupyter nb_conda_kernels
 4. Change the kernel to **Python [conda env:tensorflow]**.
 5. Click the **Run** button to move through the cells in sequence.
 
-### Run the Sample on Intel® DevCloud (Optional)
-
-1. If you do not already have an account, request an Intel® DevCloud account at [*Create an Intel® DevCloud Account*](https://intelsoftwaresites.secure.force.com/DevCloud/oneapi).
-2. On a Linux* system, open a terminal.
-3. SSH into Intel® DevCloud.
-   ```
-   ssh DevCloud
-   ```
-   > **Note**: You can find information about configuring your Linux system and connecting to Intel DevCloud at Intel® DevCloud for oneAPI [Get Started](https://devcloud.intel.com/oneapi/get_started).
-
-4. You can specify a CPU node using a single line script.
-   ```
-   qsub -I -l nodes=1:xeon:ppn=2 -d .
-   ```
-
-   - `-I` (upper case I) requests an interactive session.
-   - `-l nodes=1:xeon:ppn=2` (lower case L) assigns one full GPU node.
-   - `-d .` makes the current folder as the working directory for the task.
-
-   |Available Nodes |Command Options
-   |:--- |:---
-   |GPU |`qsub -l nodes=1:gpu:ppn=2 -d .`
-   |CPU |`qsub -l nodes=1:xeon:ppn=2 -d .`
-
-5. Activate conda.
-   ` $ conda activate`
-6. Follow the instructions to open the URL with the token in your browser.
-7. Locate and select the Notebook.
-   ```
-   ResNet50_Inference.ipynb
-   ````
-8. Change the kernel to **Python [conda env:tensorflow]**.
-9. Run every cell in the Notebook in sequence.
 
 ## License
 
 Code samples are licensed under the MIT license. See
 [License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.
 
-Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
+Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
+
+
+*Other names and brands may be claimed as the property of others. [Trademarks](https://www.intel.com/content/www/us/en/legal/trademarks.html)
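The Key Implementation Details section retained above mentions using TensorFlow* runtime settings to maximize CPU performance. For readers skimming the diff, this is the kind of knob it refers to; the thread counts below are placeholders, not values from the sample:

```python
# Typical CPU tuning for ResNet50 inference: size the intra-op pool to the
# physical cores and keep inter-op small. Values are illustrative placeholders.
import tensorflow as tf

tf.config.threading.set_intra_op_parallelism_threads(28)  # ~physical cores (assumed)
tf.config.threading.set_inter_op_parallelism_threads(2)   # concurrent independent ops
```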

AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8/ResNet50_Inference.ipynb

Lines changed: 3 additions & 3 deletions

@@ -4,9 +4,9 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "# Getting Started with [Intel Model Zoo](https://github.com/IntelAI/models)\n",
+   "# Getting Started with [ Intel® AI Reference models](https://github.com/IntelAI/models)\n",
    "\n",
-   "This code sample will serve as a sample use case to perform TensorFlow ResNet50 inference on a synthetic data implementing a FP32 and Int8 pre-trained model. The pre-trained model published as part of Intel Model Zoo will be used in this sample. "
+   "This code sample will serve as a sample use case to perform TensorFlow ResNet50 inference on a synthetic data implementing a FP32 and Int8 pre-trained model. The pre-trained model published as part of Intel® AI Reference models will be used in this sample. "
   ]
  },
  {
@@ -67,7 +67,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-   "%cd /opt/intel/oneapi/modelzoo/latest"
+   "%cd /opt/intel/oneapi/reference_models/latest"
   ]
  },
 {
AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8/ResNet50_Inference_gpu.ipynb

Lines changed: 3 additions & 3 deletions

@@ -4,9 +4,9 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "# Getting Started with [Intel Model Zoo](https://github.com/IntelAI/models)\n",
+   "# Getting Started with [Intel® AI Reference models](https://github.com/IntelAI/models)\n",
    "\n",
-   "This code sample will serve as a sample use case to perform TensorFlow ResNet50v1.5 inference on a synthetic data implementing a FP32/FP16 and Int8 pre-trained model. The pre-trained model published as part of Intel Model Zoo will be used in this sample. "
+   "This code sample will serve as a sample use case to perform TensorFlow ResNet50v1.5 inference on a synthetic data implementing a FP32/FP16 and Int8 pre-trained model. The pre-trained model published as part of Intel® AI Reference models will be used in this sample. "
   ]
  },
  {
@@ -70,7 +70,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-   "%cd /opt/intel/oneapi/modelzoo/latest"
+   "%cd /opt/intel/oneapi/reference_models/latest"
   ]
  },
 {
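Since the GPU variant of the notebook targets Intel GPUs through the Intel® Extension for TensorFlow*, a quick device check before running it can save a failed session. A hedged sketch (the `XPU` device type is how the extension registers Intel GPUs, assuming it is installed):

```python
# List Intel GPU ("XPU") devices registered by Intel® Extension for TensorFlow*.
# Sketch only; requires the extension to be installed alongside TensorFlow.
import tensorflow as tf

print(tf.config.list_physical_devices("XPU"))
```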
