README.md (10 additions, 9 deletions)
@@ -10,7 +10,7 @@ To see what's new and easily filter applications by domain and framework, please
 
 For more detailed benchmark information, please visit our [Performance Results page](https://www.graphcore.ai/performance-results).
 
-> The code presented here requires using Poplar SDK 2.6.x
+> The code presented here requires using Poplar SDK 3.0.x
 
 Please install and enable the Poplar SDK following the instructions in the [Getting Started](https://docs.graphcore.ai/en/latest/getting-started.html#pod-system-getting-started-guides) guide for your IPU system.
 
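The hunk above bumps a hard SDK requirement (2.6.x to 3.0.x), which is the kind of constraint worth checking before running anything. A minimal stdlib-only sketch of such a check; the function name and the comparison against a required `major.minor` series are illustrative assumptions, not part of the repository:

```python
# Minimal sketch: check an installed SDK version string against a required
# major.minor series (here Poplar SDK 3.0.x). The version string is assumed
# to come from the installed SDK; parsing is illustrative only.
def meets_requirement(installed: str, required_major: int = 3, required_minor: int = 0) -> bool:
    """Return True if `installed` (e.g. "3.0.0") is in the required major.minor series."""
    parts = installed.strip().split(".")
    major, minor = int(parts[0]), int(parts[1])
    return (major, minor) == (required_major, required_minor)

print(meets_requirement("3.0.0"))  # True: in the 3.0.x series
print(meets_requirement("2.6.1"))  # False: older SDK series
```

Any patch release in the required series (3.0.0, 3.0.2, ...) passes, while both older and newer series fail, matching the "requires 3.0.x" wording rather than a "3.0 or later" reading.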
@@ -67,7 +67,7 @@ If you require POD128/256 setup and configuration for our applications, please c
 | Group BERT | NLP | Training |[TensorFlow 1](nlp/bert/tensorflow1/README.md#GroupBERT_model)|
 | Packed BERT | NLP | Training |[PyTorch](nlp/bert/pytorch), [PopART](nlp/bert/popart)|
 | GPT2 | NLP | Training |[PyTorch](nlp/gpt2/pytorch), [Hugging Face Optimum](https://huggingface.co/Graphcore/gpt2-medium-ipu)|
-| GPTJ | NLP | Training |[PopXL](nlp/gpt_j/popxl)|
+| GPTJ | NLP | Training |[PopXL](nlp/gpt_j/popxl)|
 | GPT3-2.7B | NLP | Training |[PopXL](nlp/gpt3_2.7B/popxl)|
 | RoBERTa | NLP | Training |[Hugging Face Optimum](https://huggingface.co/Graphcore/roberta-large-ipu)|
 | DeBERTa | NLP | Training |[Hugging Face Optimum](https://huggingface.co/Graphcore/deberta-base-ipu)|
@@ -147,7 +147,7 @@ If you require POD128/256 setup and configuration for our applications, please c
 | Model | Domain | Type |Links |
 | ------- | ------- |------- | ------- |
 | MNIST RigL | Dynamic Sparsity | Training |[TensorFlow 1](sparsity/dynamic_sparsity/tensorflow1/mnist_rigl)|
-| Autoregressive Language Modelling | Dynamic Sparsity | Training | [TensorFlow 1](sparsity/dynamic_sparsity/tensorflow1/language_modelling)
+| Autoregressive Language Modelling | Dynamic Sparsity | Training | [TensorFlow 1](sparsity/dynamic_sparsity/tensorflow1/language_modelling)
@@ -178,7 +178,7 @@ The following applications have been archived. More information can be provided
 | Minigo | Reinforcement Learning | Training | TensorFlow 1|
 
 
-<br>
+<br>
 
 ## Developer Resources
 -[Documentation](https://docs.graphcore.ai/en/latest/): Explore our software documentation, user guides, and technical notes
@@ -202,7 +202,7 @@ For more information on using the examples-utils benchmarking module, please ref
 ## PopVision™ Tools
 Visualise your code's inner workings with a user-friendly, graphical interface to optimise your machine learning models.
 
-[Download](https://www.graphcore.ai/developer/popvision-tools) PopVision to analyse IPU performance and utilisation.
+[Download](https://www.graphcore.ai/developer/popvision-tools) PopVision to analyse IPU performance and utilisation.
 
 
 <br>
@@ -229,17 +229,18 @@ Unless otherwise specified by a LICENSE file in a subdirectory, the LICENSE refe
 
 <details>
 <summary>Sep 2022</summary>
-<br>
+<br>
 
 * Added those models below to reference models
 * Vision : MAE (PyTorch), G16 EfficientNet (PyTorch)
 * NLP : GPTJ (PopXL), GPT3-2.7B (PopXL)
 * Multimodal : Frozen in time (PyTorch), ruDalle (Preview) (PopXL)
+* Deprecating all TensorFlow 1 applications. Support will be removed in the next release.
 </details>
 
 <details>
 <summary>Aug 2022</summary>
-<br>
+<br>
 
 * Change the folder name of models
 * NLP : from gpt to gpt2
@@ -248,10 +249,10 @@ Unless otherwise specified by a LICENSE file in a subdirectory, the LICENSE refe
 </details>
 <details>
 <summary>July 2022</summary>
-<br>
+<br>
 
 * Major reorganisation of all the apps so that they are arranged as: problem domain / model / framework.
-* Problem domains: Vision, NLP, Speech, GNN, Sparsity, AI for Simulation, Recommender systems, Reinforcement learning, Probability, Multimodal, and Miscellaneous.
+* Problem domains: Vision, NLP, Speech, GNN, Sparsity, AI for Simulation, Recommender systems, Reinforcement learning, Probability, Multimodal, and Miscellaneous.
 * Added those models below to reference models
 * Vision : Swin (PyTorch), ViT (Hugging Face Optimum)
 * NLP : GPT2 Small/Medium/Large (PyTorch), BERT-Base/Large (PopXL), BERT-Base (PaddlePaddle), BERT-Base/Large (Hugging Face Optimum), GPT2 Small/Medium (Hugging Face Optimum), RoBERTa Base/Large (Hugging Face Optimum), DeBERTa (Hugging Face Optimum), HuBERT (Hugging Face Optimum), BART (Hugging Face Optimum), T5 small (Hugging Face Optimum)
vision/cnns/tensorflow2/README.md (1 addition, 1 deletion)
@@ -362,7 +362,7 @@ For ease of use, the entire instruction is implemented in the default configurat
 
 Following the training and validation runs, the `export_for_serving.py` script can be used to export a model to SavedModel format, which can subsequently be deployed to TensorFlow Serving instances and thus made available for inference. The model should be defined using the same options or configuration file that were provided to `train.py` for training.
 The following command line creates a SavedModel containing ResNet-50 with weights initialized from the `checkpoint.h5` file:
 Please keep in mind that the exported SavedModel can't be used to load the model back into a TensorFlow script, as it only contains the IPU runtime op and an opaque executable, and no model state.
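Once deployed, a TensorFlow Serving instance is queried over its REST API, which expects a JSON body of the form `{"instances": [...]}`. A minimal stdlib-only sketch of building such a request payload; the helper name and the toy input are illustrative assumptions, and a real ResNet-50 request would need inputs preprocessed the same way as during training:

```python
import json

def build_predict_request(batch):
    """Build the JSON body for a TensorFlow Serving REST predict call.

    TF Serving's REST API expects {"instances": [...]}, one entry per example.
    """
    return json.dumps({"instances": batch})

# Toy 2x2 single-channel "image" batch; a real ResNet-50 request would carry
# 224x224x3 inputs with the training-time preprocessing applied.
payload = build_predict_request([[[0.0, 0.1], [0.2, 0.3]]])
print(payload)
```

The payload would typically be POSTed to TF Serving's predict endpoint, `http://<host>:8501/v1/models/<model_name>:predict`; the host and model name are deployment-specific.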