
Commit a11dfa0

Update README.md
1 parent 77963df commit a11dfa0

1 file changed: +4 -8 lines


README.md

Lines changed: 4 additions & 8 deletions
````diff
@@ -34,8 +34,7 @@ Usual workflow is to first setup AML (see [AML setup](#aml-setup)), source envir
 note that the example uses PyTorch - we recommend using Ampere Optimized PyTorch for best results (see [AML setup](#aml-setup))
 ```bash
 source set_env_variables.sh
-cd computer_vision/classification/resnet_50_v15
-IGNORE_DATASET_LIMITS=1 AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=32 python3 run.py -m resnet50 -p fp32 -b 16 -f pytorch
+IGNORE_DATASET_LIMITS=1 AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=32 python3 computer_vision/classification/resnet_50_v15/run.py -m resnet50 -p fp32 -b 16 -f pytorch
 ### the command above will run the model utilizing 32 threads, with batch size of 16
 ### implicit conversion to FP16 datatype will be applied - you can default to fp32 precision by not setting the AIO_IMPLICIT_FP16_ variable
 ```
````
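As the in-file comments note, the implicit FP16 conversion is opt-in via the AIO_IMPLICIT_FP16_TRANSFORM_FILTER variable. A minimal sketch (not part of this commit) of the updated ResNet-50 command at default fp32 precision, assuming the repository root as the working directory:

```bash
# Sketch only: run ResNet-50 at default fp32 precision by leaving
# AIO_IMPLICIT_FP16_TRANSFORM_FILTER unset, per the README comment above.
source set_env_variables.sh
IGNORE_DATASET_LIMITS=1 AIO_NUM_THREADS=32 python3 computer_vision/classification/resnet_50_v15/run.py -m resnet50 -p fp32 -b 16 -f pytorch
```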
````diff
@@ -47,8 +46,7 @@ IGNORE_DATASET_LIMITS=1 AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=
 note that the example uses PyTorch - we recommend using Ampere Optimized PyTorch for best results (see [AML setup](#aml-setup))
 ```bash
 source set_env_variables.sh
-cd speech_recognition/whisper/
-AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=32 python3 run.py -m tiny.en
+AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=32 python3 speech_recognition/whisper/run.py -m tiny.en
 ### the command above will run the model utilizing 32 threads
 ### implicit conversion to FP16 datatype will be applied - you can default to fp32 precision by not setting the AIO_IMPLICIT_FP16_ variable
 ```
````
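AIO_NUM_THREADS sets the thread count used by the run ("utilizing 32 threads" above). A sketch of the same Whisper command pinned to a different count, assuming the machine has at least that many cores available:

```bash
# Sketch only: same tiny.en run limited to 16 threads (assumes >=16 cores available).
source set_env_variables.sh
AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=16 python3 speech_recognition/whisper/run.py -m tiny.en
```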
````diff
@@ -58,9 +56,8 @@ AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=32 python3 run.py -m tin
 note that the example uses PyTorch - we recommend using Ampere Optimized PyTorch for best results (see [AML setup](#aml-setup))
 ```bash
 source set_env_variables.sh
-cd computer_vision/object_detection/yolo_v8
 wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l.pt
-AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=32 python3 run.py -m yolov8l.pt -p fp32 -f pytorch
+AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=32 python3 computer_vision/object_detection/yolo_v8/run.py -m yolov8l.pt -p fp32 -f pytorch
 ### the command above will run the model utilizing 32 threads
 ### implicit conversion to FP16 datatype will be applied - you can default to fp32 precision by not setting the AIO_IMPLICIT_FP16_ variable
 ```
````
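With the `cd` step gone, the `wget` call drops yolov8l.pt into the current directory (the repository root) and `-m yolov8l.pt` resolves against it. A sketch that keeps downloaded weights in a separate folder instead; whether `-m` accepts an arbitrary path is an assumption here:

```bash
# Sketch only: keep downloaded weights out of the repo root (assumes -m accepts a path).
source set_env_variables.sh
mkdir -p models
wget -O models/yolov8l.pt https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l.pt
AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=32 python3 computer_vision/object_detection/yolo_v8/run.py -m models/yolov8l.pt -p fp32 -f pytorch
```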
````diff
@@ -70,9 +67,8 @@ AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=32 python3 run.py -m yol
 note that the example uses PyTorch - we recommend using Ampere Optimized PyTorch for best results (see [AML setup](#aml-setup))
 ```bash
 source set_env_variables.sh
-cd natural_language_processing/extractive_question_answering/bert_large
 wget -O bert_large_mlperf.pt https://zenodo.org/records/3733896/files/model.pytorch?download=1
-AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=32 python3 run_mlperf.py -m bert_large_mlperf.pt -p fp32 -f pytorch
+AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=32 python3 natural_language_processing/extractive_question_answering/bert_large/run_mlperf.py -m bert_large_mlperf.pt -p fp32 -f pytorch
 ### the command above will run the model utilizing 32 threads
 ### implicit conversion to FP16 datatype will be applied - you can default to fp32 precision by not setting the AIO_IMPLICIT_FP16_ variable
 ```
````
