
Commit 187775b

Updates with Poplar SDK 2.5 release
1 parent 03e95d4

File tree

681 files changed (+42128, -6225 lines)


README.md

Lines changed: 96 additions & 18 deletions
@@ -4,66 +4,134 @@ This repository contains sample applications and code examples for use with Grap
 
 If you are interested in finding out more about Graphcore, including
 getting preview access to IPUs to run these examples, please register
-your interest [here](https://www.graphcore.ai/product_info).
+your interest [here](https://www.graphcore.ai/product_info)
 
 Please note we are not currently accepting pull requests or issues on this
 repository. If you are actively using this repository and want to report any issues, please raise a ticket through the Graphcore support portal: https://www.graphcore.ai/support.
 
 The latest version of the documentation for the Poplar software stack, and other developer resources, is available at https://www.graphcore.ai/developer.
 
-> The code presented here requires using Poplar SDK 2.4.x
+> The code presented here requires using Poplar SDK 2.5.x
 
 Please install and enable the Poplar SDK following the instructions in the Getting Started guide for your IPU system.
 
 Unless otherwise specified by a LICENSE file in a subdirectory, the LICENSE referenced at the top level applies to the files in this repository.
+<br>
+<br>
 
 ## Repository contents
+1. [Computer Vision](#cv)
+2. [Natural Language Processing](#nlp)
+3. [Speech](#speech)
+4. [Graph Neural Network](#gnn)
+5. [AI for Simulation](#simulation)
+6. [Recommender Systems](#recommender_systems)
+7. [Miscellaneous](#miscellaneous)
 
-### Application examples
+<br>
 
-The [applications/](applications) folder contains example applications written in different frameworks targeting the IPU. See the READMEs in each folder for details on how to use these applications.
 
+### Computer Vision <a name="cv"></a>
 | Model | Domain | Type |Links |
 | ------- | ------- |------- | ------- |
-| ResNet | Image Classifcation | Training & Inference | [TensorFlow 1](applications/tensorflow/cnns/) , [TensorFlow 2](applications/tensorflow2/classification/), [PyTorch](applications/pytorch/cnns/)|
-| ResNeXt | Image Classifcation | Training & Inference | [TensorFlow 1](applications/tensorflow/cnns/) , [PopART (Inference)](applications/popart/resnext_inference)
-| EfficientNet | Image Classifcation | Training & Inference | [TensorFlow 1](applications/tensorflow/cnns/) , [PyTorch](applications/pytorch/cnns/)|
-| MobileNet | Image Classifcation | Inference | [TensorFlow 1](applications/tensorflow/cnns/inference) |
-| MobileNetv2 | Image Classifcation | Inference | [TensorFlow 1](applications/tensorflow/cnns/inference) |
-| MobileNetv3 | Image Classifcation | Training & Inference | [PyTorch](applications/pytorch/cnns/) |
-| ViT(Vision Transformer) | Image Classifcation | Training| [PyTorch](applications/pytorch/vit) |
+| ResNet | Image Classification | Training & Inference | [TensorFlow 1](applications/tensorflow/cnns/) , [TensorFlow 2](applications/tensorflow2/classification/), [PyTorch](applications/pytorch/cnns/)|
+| ResNeXt | Image Classification | Training & Inference | [TensorFlow 1](applications/tensorflow/cnns/) , [PopART (Inference)](applications/popart/resnext_inference)
+| EfficientNet | Image Classification | Training & Inference | [TensorFlow 1](applications/tensorflow/cnns/) , [PyTorch](applications/pytorch/cnns/)|
+| MobileNet | Image Classification | Inference | [TensorFlow 1](applications/tensorflow/cnns/inference) |
+| MobileNetv2 | Image Classification | Inference | [TensorFlow 1](applications/tensorflow/cnns/inference) |
+| MobileNetv3 | Image Classification | Training & Inference | [PyTorch](applications/pytorch/cnns/) |
+| ViT(Vision Transformer) | Image Classification | Training| [PyTorch](applications/pytorch/vit) |
+| DINO | Image Classification | Training| [PyTorch](applications/pytorch/dino) |
 | Yolov3 | Object Detection | Training & Inference | [TensorFlow 1](applications/tensorflow/detection/yolov3) |
 | Yolov4-P5 | Object Detection | Inference | [PyTorch](applications/pytorch/detection) |
 | Faster RCNN | Object Detection | Training & Inference | [PopART](applications/popart/faster-rcnn) |
+| EfficientDet | Object Detection | Inference | [TensorFlow 2](applications/tensorflow2/efficientdet) |
+| SSD | Object Detection | Inference | [TensorFlow 1](code_examples/tensorflow/ssd)|
 | UNet (Medical) | Image segmentation | Training & Inference | [TensorFlow 2](applications/tensorflow2/unet/) |
+| UNet (Industrial) | Image segmentation | Training | [TensorFlow 1](code_examples/tensorflow/unet_industrial) |
 | miniDALL-E | Generative model in Vision | Training & Inference | [PyTorch](applications/pytorch/miniDALL-E) |
+| Neural Image Fields | Neural Radiance Fields | Training | [TensorFlow 2](code_examples/tensorflow2/neural_image_fields) |
+<br>
+
+### Natural Language Processing <a name="nlp"></a>
+| Model | Domain | Type |Links |
+| ------- | ------- |------- | ------- |
 | BERT | NLP | Training & Inference |[TensorFlow 1](applications/tensorflow/bert) , [PyTorch](applications/pytorch/bert) , [PopART](applications/popart/bert), [TensorFlow 2](applications/tensorflow2/bert)|
+| Group BERT | NLP | Training |[TensorFlow 1](applications/tensorflow/bert/README.md#GroupBERT_model) |
+| Packed BERT | NLP | Training |[PyTorch](applications/pytorch/bert), [PopART](applications/popart/bert) |
+
+<br>
+
+
+### Speech <a name="speech"></a>
+| Model | Domain | Type |Links |
+| ------- | ------- |------- | ------- |
 | DeepVoice3 | TTS (TextToSpeech) | Training & Inference |[PopART](applications/popart/deep_voice) |
 | FastSpeech2 | TTS(TextToSpeech) | Training & Inference | [TensorFlow 2](applications/tensorflow2/fastspeech2/) |
-| Conformer | STT(SpeechToText) | Training & Inference | [PopART](applications/popart/conformer_asr) |
-| Conformer with Transformer | STT(SpeechToText) | Training & Inference | [TensorFlow 1](applications/tensorflow/conformer) , [PyTorch](applications/pytorch/conformer) |
+| Conformer | STT(SpeechToText) | Training & Inference | [PopART](applications/popart/conformer_asr), [TensorFlow 1](applications/tensorflow/conformer) , [PyTorch](applications/pytorch/conformer) |
 | Transfomer Transducer | STT(SpeechToText) | Training & Inference | [PopART](applications/popart/transformer_transducer) |
+<br>
+
+### Graph Neural Network <a name="gnn"></a>
+| Model | Domain | Type |Links |
+| ------- | ------- |------- | ------- |
 | TGN (Temporal Graph Network) | GNN | Training & Inference | [TensorFlow 1](applications/tensorflow/tgn/) |
 | MPNN (Message Passing Neural Networks) | GNN | Training & Inference | [TensorFlow 2](code_examples/tensorflow2/message_passing_neural_network) |
+| Spektral GNN library with QM9 | GNN | Training | [TensorFlow 2](code_examples/tensorflow2/gnn) |
+| Cluster GCN | GNN | Training & Inference | [TensorFlow 2](applications/tensorflow2/cluster_gcn) |
+
+<br>
+
+### AI for Simulation <a name="simulation"></a>
+| Model | Domain | Type |Links |
+| ------- | ------- |------- | ------- |
+| DeepDriveMD | Biology (Protein folding) | Training | [TensorFlow 2](code_examples/tensorflow2/deep_drive_md) |
+| CosmoFlow example using 3D Convolutions | Cosmology| Training & Inference | [TensorFlow 1](code_examples/tensorflow/cosmoflow)|
+
+<br>
+
+### Recommender Systems <a name="recommender_systems"></a>
+| Model | Domain | Type |Links |
+| ------- | ------- |------- | ------- |
 | Deep AutoEncoders for Collaborative Filtering | Recommender Systems | Training & Inference | [TensorFlow 1](applications/tensorflow/autoencoder) |
 | Click through rate: Deep Interest Network | Recommender Systems | Training & Inference | [TensorFlow 1](applications/tensorflow/click_through_rate) |
 | Click through rate: Deep Interest Evolution Network | Recommender Systems | Training & Inference | [TensorFlow 1](applications/tensorflow/click_through_rate) |
+<br>
+
+### Miscellaneous <a name="miscellaneous"></a>
+| Model | Domain | Type |Links |
+| ------- | ------- |------- | ------- |
 | RL Policy model | Reinforcement Learning | Training | [TensorFlow 1](applications/tensorflow/reinforcement_learning) |
 | MNIST RigL | Dynamic Sparsity | Training | [TensorFlow 1](applications/tensorflow/dynamic_sparsity/mnist_rigl) |
-| Autoregressive Language Modelling | Dynamic Sparsity | Training | [TensorFlow 1](applications/tensorflow/dynamic_sparsity/language_modelling) |
-| Sales forecasting | MLP (Multi-Layer Perceptron) | Training | [TensorFlow 1](applications/tensorflow/sales_forcasting/language_modelling) |
+| Autoregressive Language Modelling | Dynamic Sparsity | Training | [TensorFlow 1](applications/tensorflow/dynamic_sparsity/language_modelling)
+| Block-Sparse library | Sparsity | Training & Inference | [PopART](code_examples/popart/block_sparse) , [TensorFlow 1](code_examples/popart/block_sparse)|
+| Sales forecasting | MLP (Multi-Layer Perceptron) | Training | [TensorFlow 1](applications/tensorflow/sales_forecasting) |
 | Contrastive Divergence VAE using MCMC methods | Generative Model | Training | [TensorFlow 1](applications/tensorflow/contrastive_divergence_vae) |
 | Monte Carlo Ray Tracing | Vision | Inference | [Poplar](applications/poplar/monte_carlo_ray_tracing) |
+| mcmc | Statistics | Training & Inference | [TensorFlow 1](code_examples/tensorflow/mcmc)|
+| Approximate Bayesian Computation (ABC) COVID-19 | Medical | Inference | [TensorFlow 2](code_examples/tensorflow2/abc_covid_19) |
 
+<br>
+<br>
 
 
+## Glossary
+<br>
+
+### Application examples
+
+The [applications/](applications) folder contains example applications written in different frameworks targeting the IPU. See the READMEs in each folder for details on how to use these applications.
+<br>
+
 ### Code examples
 
 The [code_examples/](code_examples) folder contains smaller models and code examples. See the READMEs in each folder for details.
+<br>
 
 ### Tutorials
 
 Tutorials and further code examples can be found in our dedicated [Tutorials repository](https://github.com/graphcore/tutorials).
+<br>
 
 ### Utilities
 
@@ -72,15 +140,25 @@ The [utils/](utils) folder contains utilities libraries and scripts that are use
 * [utils/examples_tests](utils/examples_tests) - Common Python helper functions for the repository's unit tests.
 * [utils/benchmarks](utils/benchmarks) - Common Python helper functions for running benchmarks on the IPU in different frameworks.
 
+<br>
+<br>
 
 ## Changelog
 
-December 2021:
+### May 2022
+- Added those models below to reference models
+- Vision : ViT-pretraining(PyTorch), DINO(PyTorch), EfficientDet-inference(TensorFlow 2), Neural Image Fields (TensorFlow 2)
+- NLP : PackedBERT(PyTorch, PopART), BERT-Large(TensorFlow 2)
+- Speech : FastSpeech2-inference(TensorFlow 2), Conformer-Large(PyTorch)
+- GNN : Cluster GCN(TensorFlow 2)
+- AI for Simulation : DeepDriveMD(TensorFlow 2)
+
+### December 2021
 - Added those models below to reference models
 - Vision : miniDALL-E(PyTorch), Faster RCNN(PopART), UNet(TensorFlow 2), ResNet50(TensorFlow 2)
 - NLP : BERT(TensorFlow 2)
-- TTS/STT : FastSpeech2(TensorFlow 2), Transfomer Transducer(PopART), Conformer with Transformer(PyTorch)
-- GNN : TGN(TensorFlow1), MPNN(TensorFlow 2)
+- Speech : FastSpeech2(TensorFlow 2), Transfomer Transducer(PopART), Conformer-Small(PyTorch)
+- GNN : TGN(TensorFlow 1), MPNN(TensorFlow 2)
 
 
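A note on the SDK requirement updated above: the README keeps the instruction to install and enable the Poplar SDK before running any example. Purely as an illustrative sketch (directory names depend on which Poplar SDK 2.5 build was unpacked and where, so adjust the paths), enabling the SDK in a shell typically looks like:

```console
# Illustrative paths only – substitute the actual Poplar SDK 2.5 directories on your system
source /path/to/poplar_sdk-2.5.0/poplar-<platform>-2.5.0/enable.sh
source /path/to/poplar_sdk-2.5.0/popart-<platform>-2.5.0/enable.sh
```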

applications/popart/bert/README_Benchmarks.md

Lines changed: 52 additions & 0 deletions
@@ -144,6 +144,58 @@ Command:
 python bert.py --config configs/mk2/pretrain_base_384.json --input-files=$DATASETS_DIR/wikipedia/AA/sequence_384/wiki_00_tokenised --epochs 1 --no-model-save --no-validation --steps-per-log 1
 ```
 
+### BERT-Packed-Large Phase 1 Pre-training Sequence length 128
+
+#### 1 x IPU-POD16
+
+Command:
+```console
+python bert.py --config configs/mk2/packed/packed_pretrain_large_128.json --input-files=$DATASETS_DIR/wikipedia/popart_packed_bert/packed_128/duplication_0 --epochs 1 --no-model-save --no-validation --steps-per-log 1
+```
+
+#### 1 x IPU-POD64
+
+Command:
+```console
+python bert.py --config configs/mk2/packed/packed_pretrain_large_128.json --replication-factor 16 --loss-scaling 32768.0 --input-files=$DATASETS_DIR/wikipedia/popart_packed_bert/packed_128/duplication_* --checkpoint-dir "checkpoint/phase1"
+```
+
+
+### BERT-Packed-Large Phase 2 Pre-training Sequence length 384
+
+#### 1 x IPU-POD16
+
+Command:
+```console
+python bert.py --config configs/mk2/packed/packed_pretrain_large_384.json --input-files=$DATASETS_DIR/wikipedia/popart_packed_bert/packed_384/duplication_0 --epochs 1 --no-model-save --no-validation --steps-per-log 1
+```
+
+#### 1 x IPU-POD64
+
+Command:
+```console
+python bert.py --config configs/mk2/packed/packed_pretrain_large_384.json --replication-factor 16 --loss-scaling 32768.0 --input-files=$DATASETS_DIR/wikipedia/popart_packed_bert/packed_384/duplication_*
+```
+
+
+### BERT-Packed-Base Phase 1 Pre-training Sequence length 128
+
+#### 1 x IPU-POD16
+
+Command:
+```console
+python bert.py --config configs/mk2/packed/packed_pretrain_base_128.json --input-files=$DATASETS_DIR/wikipedia/popart_packed_bert/packed_128/duplication_0 --epochs 1 --no-model-save --no-validation --steps-per-log 1
+```
+
+### BERT-Packed-Base Phase 2 Pre-training Sequence length 384
+
+#### 1 x IPU-POD16
+
+Command:
+```console
+python bert.py --config configs/mk2/packed/packed_pretrain_base_384.json --input-files=$DATASETS_DIR/wikipedia/popart_packed_bert/packed_384/duplication_0 --epochs 1 --no-model-save --no-validation --steps-per-log 1
+```
+
 ### BERT Large SQuAD Sequence length 384
 
 #### 1 x IPU-POD16
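As a usage note (not part of the commit), the new packed-BERT benchmark commands above all read their input through the `DATASETS_DIR` environment variable, so a run only needs that variable pointed at the pre-processed Wikipedia data first. The dataset location below is an assumed example:

```console
# Assumed dataset location – use wherever the packed Wikipedia data actually lives
export DATASETS_DIR=/localdata/datasets
python bert.py --config configs/mk2/packed/packed_pretrain_large_128.json \
    --input-files=$DATASETS_DIR/wikipedia/popart_packed_bert/packed_128/duplication_0 \
    --epochs 1 --no-model-save --no-validation --steps-per-log 1
```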

applications/popart/bert/bert.py

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -336,7 +336,7 @@ def bert_session_options(args, model):
336336

337337
# These options are necessary to allow poplar to overlap processing of
338338
# multiple iterations in the host side
339-
options.defaultPrefetchBufferingDepth = 3
339+
options.defaultBufferingDepth = args.buffering_depth
340340
options.rearrangeAnchorsOnHost = False
341341
engine_options["exchange.streamBufferOverlap"] = "hostRearrangeOnly"
342342

applications/popart/bert/bert_model.py

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -564,7 +564,7 @@ def value(x, y):
     def generate_transformer_periodic_pos_data(self, dtype, shape, min_timescale=1.0, max_timescale=1.0e4):
         """
         Periodic position initialiser, from 3.5 of "Attention is All You Need". Adapted from:
-        https://github.com/tensorflow/models/tree/master/official/transformer/v2
+        https://github.com/tensorflow/models/tree/v2.1.0/official/transformer/v2
         """
         position = np.arange(0, shape[0], dtype=dtype)
         num_timescales = shape[1] // 2
applications/popart/bert/bert_tf_loader.py

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -238,7 +238,7 @@ def load_initializers_from_tf(
238238
task
239239
):
240240
"""
241-
Loads weights, etc. from Tensorflow files into a dictionary of Numpy Arrays.
241+
Loads weights, etc. from TensorFlow files into a dictionary of Numpy Arrays.
242242
243243
Can read either checkpoint files, or frozen graphs, according to the
244244
`is_checkpoint` flag, passed in as the second argument.
@@ -264,7 +264,7 @@ def load_model_from_tf(
264264
task
265265
):
266266
"""
267-
Loads weights, etc. from Tensorflow files into the Graphcore IPU BERT
267+
Loads weights, etc. from TensorFlow files into the Graphcore IPU BERT
268268
implementation.
269269
270270
Can read either checkpoint files, or frozen graphs, according to the
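These docstrings describe loading TensorFlow checkpoint or frozen-graph weights into NumPy arrays. Purely as an illustration of the checkpoint path (not the loader's actual implementation), the standard TensorFlow checkpoint reader can produce such a dictionary:

```python
import tensorflow as tf


def checkpoint_to_numpy(ckpt_prefix):
    """Return {variable_name: numpy array} for every variable in a TF checkpoint."""
    reader = tf.train.load_checkpoint(ckpt_prefix)
    return {name: reader.get_tensor(name)
            for name in reader.get_variable_to_shape_map()}


# e.g. weights = checkpoint_to_numpy("uncased_L-12_H-768_A-12/bert_model.ckpt")  # path is illustrative
```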

applications/popart/bert/configs/mk2/packed/packed_pretrain_large_128.json

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -39,7 +39,7 @@
3939
"duplication_factor": 1,
4040
"epochs_to_cache": 0,
4141
"embedding_serialization_vocab_steps": 5,
42-
"available_memory_proportion": [0.15, 0.4, 0.4, 0.4],
42+
"available_memory_proportion": [0.15, 0.25, 0.25, 0.25],
4343
"pipeline": true,
4444
"checkpoint_dir": "checkpoints/mk2/packed/packed_pretrain_large_128",
4545
"no_validation": true

applications/popart/bert/configs/mk2/pretrain_large_128_POD128.json

Lines changed: 0 additions & 60 deletions
This file was deleted.
