
Commit aac432e

Additional documentation changes with SDK 3.0 release
1 parent 8ae588c · commit aac432e

File tree

2 files changed: +11, -10 lines

README.md

Lines changed: 10 additions & 9 deletions
@@ -10,7 +10,7 @@ To see what's new and easily filter applications by domain and framework, please
 
 For more detailed benchmark information, please visit our [Performance Results page](https://www.graphcore.ai/performance-results).
 
-> The code presented here requires using Poplar SDK 2.6.x
+> The code presented here requires using Poplar SDK 3.0.x
 
 
 Please install and enable the Poplar SDK following the instructions in the [Getting Started](https://docs.graphcore.ai/en/latest/getting-started.html#pod-system-getting-started-guides) guide for your IPU system.

@@ -67,7 +67,7 @@ If you require POD128/256 setup and configuration for our applications, please c
 | Group BERT | NLP | Training |[TensorFlow 1](nlp/bert/tensorflow1/README.md#GroupBERT_model) |
 | Packed BERT | NLP | Training |[PyTorch](nlp/bert/pytorch), [PopART](nlp/bert/popart) |
 | GPT2 | NLP | Training |[PyTorch](nlp/gpt2/pytorch) , [Hugging Face Optimum](https://huggingface.co/Graphcore/gpt2-medium-ipu) |
-| GPTJ | NLP | Training |[PopXL](nlp/gpt_j/popxl)|
+| GPTJ | NLP | Training |[PopXL](nlp/gpt_j/popxl)|
 | GPT3-2.7B | NLP | Training |[PopXL](nlp/gpt3_2.7B/popxl) |
 | RoBERTa | NLP | Training | [Hugging Face Optimum](https://huggingface.co/Graphcore/roberta-large-ipu)|
 | DeBERTa | NLP | Training | [Hugging Face Optimum](https://huggingface.co/Graphcore/deberta-base-ipu)|

@@ -147,7 +147,7 @@ If you require POD128/256 setup and configuration for our applications, please c
 | Model | Domain | Type |Links |
 | ------- | ------- |------- | ------- |
 | MNIST RigL | Dynamic Sparsity | Training | [TensorFlow 1](sparsity/dynamic_sparsity/tensorflow1/mnist_rigl) |
-| Autoregressive Language Modelling | Dynamic Sparsity | Training | [TensorFlow 1](sparsity/dynamic_sparsity/tensorflow1/language_modelling)
+| Autoregressive Language Modelling | Dynamic Sparsity | Training | [TensorFlow 1](sparsity/dynamic_sparsity/tensorflow1/language_modelling)
 | Block-Sparse library | Sparsity | Training & Inference | [PopART](sparsity/block_sparse/popart) , [TensorFlow 1](sparsity/block_sparse/tensorflow1)|
 
 
@@ -178,7 +178,7 @@ The following applications have been archived. More information can be provided
 | Minigo | Reinforcement Learning | Training | TensorFlow 1|
 
 
-<br>
+<br>
 
 ## Developer Resources
 - [Documentation](https://docs.graphcore.ai/en/latest/): Explore our software documentation, user guides, and technical notes

@@ -202,7 +202,7 @@ For more information on using the examples-utils benchmarking module, please ref
 ## PopVision™ Tools
 Visualise your code's inner workings with a user-friendly, graphical interface to optimise your machine learning models.
 
-[Download](https://www.graphcore.ai/developer/popvision-tools) PopVision to analyse IPU performance and utilisation.
+[Download](https://www.graphcore.ai/developer/popvision-tools) PopVision to analyse IPU performance and utilisation.
 
 <br>
 
@@ -229,17 +229,18 @@ Unless otherwise specified by a LICENSE file in a subdirectory, the LICENSE refe
 
 <details>
 <summary>Sep 2022</summary>
-<br>
+<br>
 
 * Added those models below to reference models
 * Vision : MAE (PyTorch), G16 EfficientNet (PyTorch)
 * NLP : GPTJ (PopXL), GPT3-2.7B (PopXL)
 * Multimodal : Frozen in time (PyTorchs), ruDalle(Preview) (PopXL)
+* Deprecating all TensorFlow 1 applications. Support will be removed in the next release.
 </details>
 
 <details>
 <summary>Aug 2022</summary>
-<br>
+<br>
 
 * Change the folder name of models
 * NLP : from gpt to gpt2
@@ -248,10 +249,10 @@ Unless otherwise specified by a LICENSE file in a subdirectory, the LICENSE refe
 </details>
 <details>
 <summary>July 2022</summary>
-<br>
+<br>
 
 * Major reorganisation of all the apps so that they are arranged as: problem domain / model / framework.
-* Problem domains: Vision, NLP, Speech, GNN, Sparsity, AI for Simultation, Recomender systems, Reinforcement learning, Probability, Multimodal, and Miscellaneous.
+* Problem domains: Vision, NLP, Speech, GNN, Sparsity, AI for Simultation, Recomender systems, Reinforcement learning, Probability, Multimodal, and Miscellaneous.
 * Added those models below to reference models
 * Vision : Swin (PyTorch) , ViT (Hugging Face Optimum)
 * NLP : GPT2 Small/Medium/Large (PyTorch), BERT-Base/Large (PopXL), BERT-Base(PaddlePaddle), BERT-Base/Large(Hugging Face Optimum), GPT2 Small/Medium (Hugging Face Optimum), RoBERTa Base/Large(Hugging Face Optimum), DeBERTa(Hugging Face Optimum), HuBERT(Hugging Face Optimum), BART(Hugging Face Optimum), T5 small(Hugging Face Optimum)

vision/cnns/tensorflow2/README.md

Lines changed: 1 addition & 1 deletion
@@ -362,7 +362,7 @@ For ease of use, the entire instruction is implemented in the default configurat
 
 Following the training and validation runs, `export_for_serving.py` script can be used to export a model to SavedModel format, which can be subsequently deployed to TensorFlow Serving instances and thus made available for inference. The model should be defined using the same options or configuration file that have been provided to `train.py` for training.
 The following command line creates a SavedModel containing Resnet50 with weights initialized from `checkpoint.h5` file:
-`python3 scripts/export_for_serving.py --config resnet50_16ipus_8k_bn_pipeline --export-dir="./resnet50_for_serving/001" --micro-batch-size=1 --checkpoint-file=checkpoint.h5 --keep-pipeline-model=False --iterations=128`
+`python3 scripts/export_for_serving.py --config resnet50_16ipus_8k_bn_pipeline --export-dir="./resnet50_for_serving/001" --micro-batch-size=1 --checkpoint-file=checkpoint.h5 --pipeline-serving-model=False --iterations=128`
 
 Please keep in mind that the exported SavedModel can't be used to load the model back into a TensorFlow script, as it only contains the IPU runtime op and an opaque executable and no model state.
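As context for the serving step described in this hunk, the sketch below shows one way the exported SavedModel could be queried once it has been loaded into a TensorFlow Serving instance, using TensorFlow Serving's standard REST predict endpoint. The server address, the model name (`resnet50`), and the input shape and preprocessing are illustrative assumptions, not values taken from this repository:

```python
# Hedged sketch: send an inference request to a TensorFlow Serving instance
# hosting the exported SavedModel. Assumptions (not from this repo): the REST
# API listens on localhost:8501, the model is served under the name "resnet50",
# and its default serving signature accepts a (224, 224, 3) float image.
import json

import numpy as np
import requests

SERVER_URL = "http://localhost:8501/v1/models/resnet50:predict"

# A single random image stands in for real, preprocessed input data.
image = np.random.rand(1, 224, 224, 3).astype(np.float32)

response = requests.post(
    SERVER_URL,
    data=json.dumps({"instances": image.tolist()}),
    headers={"Content-Type": "application/json"},
    timeout=30,
)
response.raise_for_status()
predictions = response.json()["predictions"]
print(f"Received {len(predictions)} prediction(s)")
```

The `{"instances": ...}` payload is TensorFlow Serving's row-format REST request. Whether the request batch size must match the `--micro-batch-size` used at export time depends on the exported executable, so treat this purely as a shape for experimentation.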

0 commit comments
