
Commit 2591db7

updating gradient URLS and TGN readme (#74)
1 parent d07b2f4 commit 2591db7

File tree: 4 files changed, +66 -24 lines changed

- gnn/cluster_gcn/tensorflow2/README.md
- gnn/tgn/pytorch/README.md
- nlp/bert/pytorch/README.md
- vision/vit/pytorch/README.md

gnn/cluster_gcn/tensorflow2/README.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ Cluster graph convolutional networks for node classification, using cluster samp
 
 Run our Cluster GCN training on arXiv dataset on Paperspace.
 <br>
-[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://console.paperspace.com/github/gradient-ai/Graphcore-Tensorflow2?machine=Free-IPU-POD16&container=graphcore%2Ftensorflow-jupyter%3A2-amd-2.6.0-ubuntu-20.04-20220804&file=%2Fget-started%2Frun_cluster_gcn_notebook.ipynb)
+[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/3UYkV6d)
 
 | Framework | domain | Model | Datasets | Tasks| Training| Inference | Reference |
 |-------------|-|------|-------|-------|-------|---|---|

gnn/tgn/pytorch/README.md

Lines changed: 59 additions & 21 deletions
@@ -1,36 +1,74 @@
 # Temporal Graph Networks
 
-This directory contains a PyTorch implementation of [Temporal Graph Networks](https://arxiv.org/abs/2006.10637) to train on IPU.
-This implementation is based on [`examples/tgn.py`](https://github.com/rusty1s/pytorch_geometric/blob/master/examples/tgn.py) from PyTorch-Geometric.
+Temporal graph networks for link prediction in dynamic graphs, based on [`examples/tgn.py`](https://github.com/rusty1s/pytorch_geometric/blob/master/examples/tgn.py) from PyTorch-Geometric, optimised for Graphcore's IPU.
 
-## Running on IPU
+Run our TGN on Paperspace.
+<br>
+[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/3uUI2nt)
 
-### Setting up the environment
-Install the Poplar SDK following the [Getting Started](https://docs.graphcore.ai/en/latest/getting-started.html) guide for the IPU system.
-Source the `enable.sh` scripts for Poplar and PopART and activate a Python virtualenv with PopTorch installed.
+| Framework | domain | Model | Datasets | Tasks| Training| Inference | Reference |
+|-------------|-|------|-------|-------|-------|---|---|
+| Pytorch | GNNs | TGN | JODIE | Link prediction ||| [Temporal Graph Networks for Deep Learning on Dynamic Graphs](https://arxiv.org/abs/2006.10637v3) |
 
-Now install the dependencies of the TGN model:
+
+## Instructions summary
+
+1. Install and enable the Poplar SDK (see Poplar SDK setup)
+
+2. Install the system and Python requirements (see Environment setup)
+
+
+## Poplar SDK setup
+To check if your Poplar SDK has already been enabled, run:
+```bash
+echo $POPLAR_SDK_ENABLED
+```
+
+If no path is provided, then follow these steps:
+1. Navigate to your Poplar SDK root directory
+
+2. Enable the Poplar SDK with:
+```bash
+cd poplar-<OS version>-<SDK version>-<hash>
+. enable.sh
+```
+
+3. Additionally, enable PopART with:
+```bash
+cd popart-<OS version>-<SDK version>-<hash>
+. enable.sh
+```
+
+More detailed instructions on setting up your environment are available in the [Poplar quick start guide](https://docs.graphcore.ai/projects/graphcloud-poplar-quick-start/en/latest/).
+
+
+## Environment setup
+To prepare your environment, follow these steps:
+
+1. Create and activate a Python3 virtual environment:
 ```bash
-pip install -r requirements.txt
+python3 -m venv <venv name>
+source <venv path>/bin/activate
 ```
 
-### Train the model
-To train the model run
+2. Navigate to the Poplar SDK root directory
+
+3. Install the PopTorch (PyTorch) wheel:
+```bash
+cd <poplar sdk root dir>
+pip3 install poptorch...x86_64.whl
+```
+
+4. Navigate to this example's root directory
+
+5. Install the Python requirements:
 ```bash
-python train.py
+pip3 install -r requirements.txt
 ```
 
-The following flags can be used to adjust the behaviour of `train.py`
 
---data: directory to load/save the data (default: data/JODIE) <br>
--t, --target: device to run on (choices: {ipu, cpu}, default: ipu) <br>
--d, --dtype: floating point format (default: float32) <br>
--e, --epochs: number of epochs to train for (default: 50) <br>
---lr: learning rate (default: 0.0001) <br>
---dropout: dropout rate in the attention module (default: 0.1) <br>
---optimizer, Optimizer (choices: {SGD, Adam}, default: Adam) <br>
+## Running and benchmarking
 
-### Running and benchmarking
 To run a tested and optimised configuration and to reproduce the performance shown on our [performance results page](https://www.graphcore.ai/performance-results), use the `examples_utils` module (installed automatically as part of the environment setup) to run one or more benchmarks. The benchmarks are provided in the `benchmarks.yml` file in this example's root directory.
 
 For example:
@@ -51,4 +89,4 @@ For more information on using the examples-utils benchmarking module, please ref
 ### License
 This application is licensed under the MIT license, see the LICENSE file at the top-level of this repository.
 
-This directory includes derived work from the PyTorch Geometric repository, https://github.com/pyg-team/pytorch_geometric by Matthias Fey and Jiaxuan You, published under the MIT license
+This directory includes derived work from the PyTorch Geometric repository, https://github.com/pyg-team/pytorch_geometric by Matthias Fey and Jiaxuan You, published under the MIT license
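The new TGN README's "Running and benchmarking" section points at the `examples_utils` module and `benchmarks.yml`, but the example command itself falls outside the hunks shown above. Below is a minimal end-to-end sketch of the new workflow; the paths, the PopTorch wheel filename, and the `python3 -m examples_utils benchmark --spec ...` entry point are assumptions based on other Graphcore example READMEs, not part of this diff.

```bash
# Sketch of the workflow described in the new TGN README; all paths, the wheel
# filename, and the benchmark invocation are illustrative assumptions.

# 1. Enable Poplar and PopART from the SDK root (directory names are versioned)
cd /path/to/poplar_sdk
. poplar-*/enable.sh
. popart-*/enable.sh

# 2. Create a virtual environment, install PopTorch and the TGN requirements
python3 -m venv ~/venvs/tgn
source ~/venvs/tgn/bin/activate
pip3 install /path/to/poplar_sdk/poptorch-*.whl   # exact filename depends on the SDK build
cd /path/to/examples/gnn/tgn/pytorch
pip3 install -r requirements.txt

# 3. Run a tested configuration from benchmarks.yml via examples_utils
#    (entry point assumed from other Graphcore example READMEs)
python3 -m examples_utils benchmark --spec benchmarks.yml
```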

nlp/bert/pytorch/README.md

Lines changed: 2 additions & 2 deletions
@@ -3,7 +3,7 @@ Bidirectional Encoder Representations from Transformers for NLP pre-training and
 
 Run our BERT-L Fine-tuning on SQuAD dataset on Paperspace.
 <br>
-[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://bash.paperspace.com/github/gradient-ai/Graphcore-PyTorch?machine=Free-IPU-POD16&container=graphcore%2Fpytorch-jupyter%3A2.6.0-ubuntu-20.04-20220804&file=%2Fget-started%2FFine-tuning-BERT.ipynb)
+[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/3WiyZIC)
 
 | Framework | domain | Model | Datasets | Tasks| Training| Inference | Reference |
 |-------------|-|------|-------|-------|-------|---|---|
@@ -67,7 +67,7 @@ pip3 install poptorch...x86_64.whl
 sudo apt install $(< required_apt_packages.txt)
 ```
 
-5. Install the Python requirements:
+6. Install the Python requirements:
 ```bash
 pip3 install -r requirements.txt
 ```

vision/vit/pytorch/README.md

Lines changed: 4 additions & 0 deletions
@@ -1,6 +1,10 @@
 # ViT (Vision Transformer)
 Vision Transformer for image recognition, optimised for Graphcore's IPU. Based on the models provided by the [`transformers`](https://github.com/huggingface/transformers) library and from [jeonsworld](https://github.com/jeonsworld/ViT-pytorch)
 
+Run our ViT on Paperspace.
+<br>
+[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/3uTF5Uj)
+
 | Framework | domain | Model | Datasets | Tasks| Training| Inference | Reference |
 |-------------|-|------|-------|-------|-------|---|-------|
 | Pytorch | Vision | ViT | ImageNet LSVRC 2012, CIFAR-10 | Image recognition ||| [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) |
