
Commit 86698d0

README and scripts updated

1 parent 535bd1b commit 86698d0

14 files changed (+42 −76 lines)

D2Go/README.md

Lines changed: 4 additions & 3 deletions
@@ -2,22 +2,22 @@
 ## Introduction

-[Detectron2](https://github.com/facebookresearch/detectron2) is one of the most widely adopted open source projects and implements state-of-the-art object detection, semantic segmentation, panoptic segmentation, and human pose prediction. [D2Go](https://github.com/facebookresearch/d2go) is powered by PyTorch 1.9.0, torchvision 0.10.0, and Detectron2 with built-in SOTA networks for mobile - the D2Go model is very small (only 2.15MB) and runs very fast on iOS.
+[Detectron2](https://github.com/facebookresearch/detectron2) is one of the most widely adopted open source projects and implements state-of-the-art object detection, semantic segmentation, panoptic segmentation, and human pose prediction. [D2Go](https://github.com/facebookresearch/d2go) is powered by PyTorch 1.9, torchvision 0.10, and Detectron2 with built-in SOTA networks for mobile - the D2Go model is very small (only 2.15MB) and runs very fast on iOS.

 This D2Go iOS demo app shows how to prepare and use the D2Go model on iOS with the newly released LibTorchvision Cocoapods. The code is based on a previous PyTorch iOS [Object Detection demo app](https://github.com/pytorch/ios-demo-app/tree/master/ObjectDetection) that uses a pre-trained YOLOv5 model, with modified pre-processing and post-processing code required by the D2Go model.

 ## Prerequisites

 * PyTorch 1.9 and torchvision 0.10 (Optional)
 * Python 3.8 or above (Optional)
-* iOS Cocoapods LibTorch 1.9.0 and LibTorchvision 0.10.0
+* iOS Cocoapods LibTorch-Lite 1.9.0 and LibTorchvision 0.10.0
 * Xcode 12.4 or later

 ## Quick Start

 This section shows how to create and use the D2Go model in an iOS app. To just build and run the app without creating the D2Go model yourself, go directly to Step 4.

-1. Install PyTorch 1.9.0 and torchvision 0.10.0, for example:
+1. Install PyTorch 1.9 and torchvision 0.10, for example:

 ```
 conda create -n d2go python=3.8.5
@@ -50,6 +50,7 @@ Run the following command to create the optimized D2Go model `d2go_optimized.pt`
 ```
 python create_d2go.py
 ```
+Both the optimized JIT model and the Lite Interpreter model will be created and saved in the project folder.

 4. Build and run the D2Go iOS app
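The dual-save pattern described in the added line can be sketched as follows, with a small torchvision classifier standing in for the traced D2Go detector — the stand-in model and file paths are assumptions for illustration, not the exact contents of `create_d2go.py`:

```
import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

# Stand-in model; the real script exports a D2Go detection model instead.
model = torchvision.models.mobilenet_v2(pretrained=True).eval()
traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))
optimized = optimize_for_mobile(traced)

optimized.save("d2go_optimized.pt")                         # optimized JIT model
optimized._save_for_lite_interpreter("d2go_optimized.ptl")  # Lite Interpreter model
```

Saving both formats lets an app use either the full-JIT `LibTorch` pod or the smaller `LibTorch-Lite` pod.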

ImageSegmentation/README.md

Lines changed: 5 additions & 7 deletions
@@ -6,9 +6,9 @@ This repo offers a Python script that converts the [PyTorch DeepLabV3 model](htt
 ## Prerequisites

-* PyTorch 1.9.0 and torchvision 0.10.0 (Optional)
+* PyTorch 1.9 and torchvision 0.10 (Optional)
 * Python 3.8 or above (Optional)
-* iOS Cocoapods LibTorch-Lite 1.9.0
+* iOS Cocoapods LibTorch-Lite 1.9.0 and LibTorchvision 0.10.0
 * Xcode 12.4 or later

 ## Quick Start
@@ -17,11 +17,9 @@ To Test Run the Image Segmentation iOS App, follow the steps below:
 ### 1. Prepare the Model

-If you don't have the PyTorch environment set up to run the script below to generate the model file, you can download it to the `ios-demo-app/ImageSegmentation` folder using the link [here](https://drive.google.com/file/d/1FHV9tN6-e3EWUgM_K3YvDoRLPBj7NHXO/view?usp=sharing).
+If you don't have the PyTorch environment set up to run the script below to generate the model file, you can download it to the `ios-demo-app/ImageSegmentation` folder using the link [here](https://drive.google.com/file/d/1_guNVutt8eTvO_YhGxkAe1uReBhNaC4f/view?usp=sharing).

-Be aware that the downloadable model file was created with PyTorch 1.7.0, matching the iOS LibTorch library 1.7.0 specified in the `Podfile`. If you use a different version of PyTorch to create your model by following the instructions below, make sure you specify the same iOS LibTorch version in the `Podfile` to avoid possible errors caused by the version mismatch. Furthermore, if you want to use the latest prototype features in the PyTorch master branch to create the model, follow the steps at [Building PyTorch iOS Libraries from Source](https://pytorch.org/mobile/ios/#build-pytorch-ios-libraries-from-source) on how to use the model in iOS.
-
-Open a Mac Terminal, first install PyTorch 1.9.0 and torchvision 0.10.0 using a command like `pip install torch torchvision`, then run the following commands:
+Open a Mac Terminal, first install PyTorch 1.9 and torchvision 0.10 using a command like `pip install torch torchvision`, then run the following commands:

 ```
 git clone https://github.com/pytorch/ios-demo-app
@@ -31,7 +29,7 @@ python deeplabv3.py
 The Python script `deeplabv3.py` is used to generate the Lite Interpreter model file `deeplabv3_scripted.ptl` to be used in iOS.

-### 2. Use LibTorch
+### 2. Use LibTorch-Lite

 Run the commands below (note the `Podfile` uses `pod 'LibTorch-Lite', '~>1.9.0'`):

ImageSegmentation/deeplabv3.py

Lines changed: 0 additions & 1 deletion
@@ -6,5 +6,4 @@
 scripted_module = torch.jit.script(model)
 optimized_model = optimize_for_mobile(scripted_module)
-optimized_model.save("ImageSegmentation/deeplabv3_scripted.pt")
 optimized_model._save_for_lite_interpreter("ImageSegmentation/deeplabv3_scripted.ptl")
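Since the diff shows only the script's tail, the full `deeplabv3.py` plausibly reads as below; the model-loading lines are an assumption based on the torchvision DeepLabV3 model the README links to:

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Assumed loading step (not shown in the diff): the torchvision DeepLabV3 model.
model = torch.hub.load('pytorch/vision:v0.10.0', 'deeplabv3_resnet50', pretrained=True)
model.eval()

scripted_module = torch.jit.script(model)
optimized_model = optimize_for_mobile(scripted_module)
# After this commit, only the Lite Interpreter (.ptl) file is saved.
optimized_model._save_for_lite_interpreter("ImageSegmentation/deeplabv3_scripted.ptl")
```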

ObjectDetection/README.md

Lines changed: 6 additions & 21 deletions
@@ -6,9 +6,9 @@
 ## Prerequisites

-* PyTorch 1.9.0 or later (Optional)
+* PyTorch 1.9 and torchvision 0.10 (Optional)
 * Python 3.8 (Optional)
-* iOS Cocoapods library LibTorch 1.9.0
+* iOS Cocoapods LibTorch-Lite 1.9.0 and LibTorchvision 0.10.0
 * Xcode 12 or later

 ## Quick Start
@@ -17,11 +17,9 @@ To Test Run the Object Detection iOS App, follow the steps below:
 ### 1. Prepare the Model

-If you don't have the PyTorch environment set up to run the script, you can download the model file [here](https://drive.google.com/file/d/1QOxNfpy_j_1KbuhN8INw2AgAC82nEw0F/view?usp=sharing) to the `ios-demo-app/ObjectDetection/ObjectDetection` folder, then skip the rest of this step and go to step 2 directly.
+If you don't have the PyTorch environment set up to run the script, you can download the model file [here](https://drive.google.com/file/d/1_MF7NVi9Csm1lizoSCp1wCtUUUpuhwet/view?usp=sharing) to the `ios-demo-app/ObjectDetection/ObjectDetection` folder, then skip the rest of this step and go to step 2 directly.

-Be aware that the downloadable model file was created with PyTorch 1.9.0, matching the iOS LibTorch library 1.9.0 specified in the `Podfile`. If you use a different version of PyTorch to create your model by following the instructions below, make sure you specify the same iOS LibTorch version in the `Podfile` to avoid possible errors caused by the version mismatch. Furthermore, if you want to use the latest prototype features in the PyTorch master branch to create the model, follow the steps at [Building PyTorch iOS Libraries from Source](https://pytorch.org/mobile/ios/#build-pytorch-ios-libraries-from-source) on how to use the model in iOS.
-
-The Python script `export.py` in the `models` folder of the [YOLOv5 repo](https://github.com/ultralytics/yolov5) is used to generate a TorchScript-formatted YOLOv5 model named `yolov5s.torchscript.pt` for mobile apps.
+The Python script `export.py` in the `models` folder of the [YOLOv5 repo](https://github.com/ultralytics/yolov5) is used to generate a TorchScript-formatted YOLOv5 model named `yolov5s.torchscript.ptl` for mobile apps.

 Open a Mac/Linux/Windows Terminal, run the following commands (note that we use the fork of the original YOLOv5 repo to make sure the code changes work, but feel free to use the original repo):
@@ -31,28 +29,15 @@ cd yolov5
 pip install -r requirements.txt
 ```

-Then edit `models/export.py` to make two changes:
-
-* Change line 50 from `model.model[-1].export = True` to `model.model[-1].export = False`
-
-* Add the following two lines of model optimization code after line 57, between `ts = torch.jit.trace(model, img)` and `ts.save(f)`:
-
-```
-from torch.utils.mobile_optimizer import optimize_for_mobile
-ts = optimize_for_mobile(ts)
-```
-
-If you ignore this step, you can still create a TorchScript model for mobile apps to use, but the inference on a non-optimized model can take twice as long as the inference on an optimized model - using the iOS app test images, the average inference time on an optimized and non-optimized model is 0.6 seconds and 1.18 seconds, respectively. See [SCRIPT AND OPTIMIZE FOR MOBILE RECIPE](https://pytorch.org/tutorials/recipes/script_optimized.html) for more details.
-
-Finally, run the script below to generate the optimized TorchScript model and copy the generated model file `yolov5s.torchscript.pt` to the `ios-demo-app/ObjectDetection/ObjectDetection` folder:
+Finally, run the script below to generate the optimized TorchScript Lite Interpreter model and copy the generated model file `yolov5s.torchscript.ptl` to the `ios-demo-app/ObjectDetection/ObjectDetection` folder:

 ```
 python models/export.py
 ```

 Note that the small sized version of the YOLOv5 model, which runs faster but with less accuracy, is generated by default when running `export.py`. You can also change the value of the `weights` parameter in `export.py` to generate the medium, large, and extra large versions of the model.

-### 2. Use LibTorch
+### 2. Use LibTorch-Lite

 Run the commands below:
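The trace-optimize-save flow the updated `export.py` is described as performing can be sketched as follows; the tiny stand-in network is an assumption that keeps the example self-contained and runnable, not the actual YOLOv5 code:

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

class TinyDetector(torch.nn.Module):
    """Stand-in for the YOLOv5 network (illustrative assumption)."""
    def __init__(self):
        super().__init__()
        self.backbone = torch.nn.Conv2d(3, 16, 3, stride=2, padding=1)
        self.head = torch.nn.Conv2d(16, 85, 1)  # 85 = 4 box coords + 1 objectness + 80 classes

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyDetector().eval()
img = torch.zeros(1, 3, 640, 640)         # YOLOv5's default input size
ts = torch.jit.trace(model, img)          # TorchScript via tracing
ts = optimize_for_mobile(ts)              # mobile-oriented graph rewrites
ts._save_for_lite_interpreter("yolov5s.torchscript.ptl")
```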

QuestionAnswering/README.md

Lines changed: 7 additions & 7 deletions
@@ -10,7 +10,7 @@ In this demo app, we'll show how to quantize and convert the Huggingface's Disti
 * PyTorch 1.9 or later (Optional)
 * Python 3.8 (Optional)
-* iOS Cocoapods library LibTorch 1.9.0
+* iOS Cocoapods LibTorch-Lite 1.9.0
 * Xcode 12 or later

 ## Quick Start
@@ -19,34 +19,34 @@ To Test Run the iOS QA demo app, run the following commands on a Terminal:
 ### 1. Prepare the Model

-If you don't have PyTorch installed or want to have a quick try of the demo app, you can download the scripted QA model compressed in a zip file [here](https://drive.google.com/file/d/1RWZa_5oSQg5AfInkn344DN3FJ5WbbZbq/view?usp=sharing), then unzip it, drag and drop it into the project, and continue to Step 2.
+If you don't have PyTorch installed or want to have a quick try of the demo app, you can download the scripted QA model compressed in a zip file [here](https://drive.google.com/file/d/1PgD3pAEf0riUiT3BfwHOm6UEGk8FfJzI/view?usp=sharing), then unzip it, drag and drop it into the project, and continue to Step 2.

 Be aware that the downloadable model file was created with PyTorch 1.9.0, matching the iOS LibTorch library 1.9.0 specified in the `Podfile`. If you use a different version of PyTorch to create your model by following the instructions below, make sure you specify the same iOS LibTorch version in the `Podfile` to avoid possible errors caused by the version mismatch. Furthermore, if you want to use the latest prototype features in the PyTorch master branch to create the model, follow the steps at [Building PyTorch iOS Libraries from Source](https://pytorch.org/mobile/ios/#build-pytorch-ios-libraries-from-source) on how to use the model in iOS.

-With PyTorch 1.9.0 installed, first install the Huggingface `transformers` by running `pip install transformers`, then run `python convert_distilbert_qa.py`.
+With PyTorch 1.9.0 installed, first install Huggingface `transformers` 4.6.1 (newer versions may have issues) by running `pip install transformers==4.6.1`, then run `python convert_distilbert_qa.py`.

 Note that `convert_distilbert_qa.py` uses a pre-defined question and text whose combined input tokens total 360, and 360 is the maximum token size for the user text and question in the app. If the text and question inputs tokenize to fewer than 360 tokens, padding is needed to make the model work correctly.

-After the script completes, drag and drop the model file `qa360_quantized.pt` to the iOS app project. [Dynamic quantization](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html) is used to quantize the model, reducing its size by half without changing the question-answering results - you can verify this by changing the last 4 lines of code in `convert_distilbert_qa.py` from:
+After the script completes, drag and drop the model file `qa360_quantized.ptl` to the iOS app project. [Dynamic quantization](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html) is used to quantize the model, reducing its size by half without changing the question-answering results - you can verify this by changing the last 4 lines of code in `convert_distilbert_qa.py` from:

 ```
 model_dynamic_quantized = torch.quantization.quantize_dynamic(model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8)
 traced_model = torch.jit.trace(model_dynamic_quantized, inputs['input_ids'], strict=False)
 optimized_traced_model = optimize_for_mobile(traced_model)
-torch.jit.save(optimized_traced_model, "qa360_quantized.pt")
+optimized_traced_model._save_for_lite_interpreter("QuestionAnswering/qa360_quantized.ptl")
 ```

 to

 ```
 traced_model = torch.jit.trace(model, inputs['input_ids'], strict=False)
 optimized_traced_model = optimize_for_mobile(traced_model)
-torch.jit.save(optimized_traced_model, "qa360.pt")
+optimized_traced_model._save_for_lite_interpreter("QuestionAnswering/qa360_quantized.ptl")
 ```

 and rerun `python convert_distilbert_qa.py` to generate a non-quantized model and use it in the app to compare with the quantized version.

-### 2. Use LibTorch
+### 2. Use LibTorch-Lite

 Run the commands below:

QuestionAnswering/convert_distilbert_qa.py

Lines changed: 0 additions & 1 deletion
@@ -14,6 +14,5 @@
 model_dynamic_quantized = torch.quantization.quantize_dynamic(model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8)
 traced_model = torch.jit.trace(model_dynamic_quantized, inputs['input_ids'], strict=False)
 optimized_traced_model = optimize_for_mobile(traced_model)
-optimized_traced_model.save("QuestionAnswering/qa360_quantized.pt")
 optimized_traced_model._save_for_lite_interpreter("QuestionAnswering/qa360_quantized.ptl")
 # 360 is the length of model input, i.e. the length of the tokenized ids of question+text
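The lines above assume `model` and `inputs` are defined earlier in `convert_distilbert_qa.py`; a plausible setup is sketched below, with an illustrative question/text pair (the repo's exact pair may differ):

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from transformers import DistilBertForQuestionAnswering, DistilBertTokenizer

model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased-distilled-squad")
model.eval()
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-distilled-squad")

# Illustrative inputs, padded to the fixed 360-token length the app expects.
question, text = "What is PyTorch?", "PyTorch is an open source machine learning framework..."
inputs = tokenizer(question, text, return_tensors="pt",
                   padding="max_length", max_length=360, truncation=True)
```

With that in place, the four lines kept by this commit quantize, trace, optimize, and save the Lite Interpreter model.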

Seq2SeqNMT/README.md

Lines changed: 5 additions & 9 deletions
@@ -12,9 +12,9 @@ This iOS demo app shows:
 ## Prerequisites

-* PyTorch 1.9.0 or later (Optional)
+* PyTorch 1.9 or later (Optional)
 * Python 3.8 (Optional)
-* iOS Cocoapods library LibTorch 1.9.0
+* iOS Cocoapods library LibTorch-Lite 1.9.0
 * Xcode 12 or later

 ## Quick Start
@@ -23,15 +23,11 @@ To Test Run the Object Detection iOS App, follow the steps below:
 ### 1. Prepare the Model

-If you don't have the PyTorch environment set up to run the script, you can download the PyTorch trained and optimized NMT encoder and decoder models compressed in a zip [here](https://drive.google.com/file/d/1Ju9ceHi5e87UW1P09-XIvPVdMjOs5kiE/view?usp=sharing), unzip it, copy it to the iOS app project folder, and continue to Step 2.
+If you don't have the PyTorch environment set up to run the script, you can download the PyTorch trained and optimized NMT encoder and decoder models compressed in a zip [here](https://drive.google.com/file/d/1Azj1AI3-clVJ7ub_FUVSm3ja7TIn43Kl/view?usp=sharing), unzip it, copy it to the iOS app project folder, and continue to Step 2.

-Be aware that the downloadable model files were created with PyTorch 1.9.0, matching the iOS LibTorch library 1.9.0 specified in the `Podfile`. If you use a different version of PyTorch to create your model by following the instructions below, make sure you specify the same iOS LibTorch version in the `Podfile` to avoid possible errors caused by the version mismatch. Furthermore, if you want to use the latest prototype features in the PyTorch master branch to create the model, follow the steps at [Building PyTorch iOS Libraries from Source](https://pytorch.org/mobile/ios/#build-pytorch-ios-libraries-from-source) on how to use the model in iOS.
+If you have a good GPU and want to train your model from scratch, uncomment the line `trainIters(encoder, decoder, 450100, print_every=5000)` in `seq2seq_nmt.py` before running `python seq2seq_nmt.py` to go through the whole process of training, saving, loading, optimizing and saving the final mobile-ready models. Otherwise, run the script to create `optimized_encoder_150k.ptl` and `optimized_decoder_150k.ptl`, and copy them to the iOS app. Note that dynamic quantization is applied to the decoder in `seq2seq_nmt.py` for its `nn.Linear` parameters to reduce the decoder model size from 29MB to 18MB.

-If you have a good GPU and want to train your model from scratch, uncomment the line `trainIters(encoder, decoder, 450100, print_every=5000)` in `seq2seq_nmt.py` before running `python seq2seq_nmt.py` to go through the whole process of training, saving, loading, optimizing and saving the final mobile-ready models.
-
-To just convert a pre-trained model `seq2seq_mt_150000.pt` to the TorchScript model used on mobile, download [seq2seq_mt_150000.pt](https://drive.google.com/file/d/1f91PvlkxS8JS0xGpMRZ3fmr0Ev80Guxk/view?usp=sharing) first to the same directory as `seq2seq_nmt.py`, then run `python seq2seq_nmt.py`. After `optimized_encoder_150k.pth` and `optimized_decoder_150k.pth` are generated, copy them to the iOS app. Note that dynamic quantization is applied to the decoder in `seq2seq_nmt.py` for its `nn.Linear` parameters to reduce the decoder model size from 29MB to 18MB.
-
-### 2. Use LibTorch
+### 2. Use LibTorch-Lite

 Run the commands below:

Seq2SeqNMT/seq2seq_nmt.py

Lines changed: 0 additions & 2 deletions
@@ -319,9 +319,7 @@ def trainIters(encoder, decoder, n_iters, print_every=1000, plot_every=100, lear
 from torch.utils.mobile_optimizer import optimize_for_mobile

 traced_encoder_optimized = optimize_for_mobile(traced_encoder)
-traced_encoder_optimized.save("optimized_encoder_150k.pth")
 traced_encoder_optimized._save_for_lite_interpreter("optimized_encoder_150k.ptl")

 traced_decoder_optimized = optimize_for_mobile(traced_decoder)
-traced_decoder_optimized.save("optimized_decoder_150k.pth")
 traced_decoder_optimized._save_for_lite_interpreter("optimized_decoder_150k.ptl")
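The README text above mentions that dynamic quantization shrinks the decoder from 29MB to 18MB before tracing; here is a minimal, self-contained sketch of that step, where the stub module is an assumption standing in for the tutorial's attention decoder:

```
import torch

class DecoderStub(torch.nn.Module):
    """Stand-in for the NMT attention decoder (illustrative assumption)."""
    def __init__(self, hidden=256, vocab=10000):
        super().__init__()
        self.gru = torch.nn.GRU(hidden, hidden)
        self.out = torch.nn.Linear(hidden, vocab)

    def forward(self, x, h):
        y, h = self.gru(x, h)
        return self.out(y), h

decoder = DecoderStub()
# Convert nn.Linear weights to int8; activations are quantized dynamically at runtime.
decoder_quantized = torch.quantization.quantize_dynamic(
    decoder, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8)
```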

SpeechRecognition/README.md

Lines changed: 6 additions & 9 deletions
@@ -8,9 +8,9 @@ In this demo app, we'll show how to quantize, trace, and optimize the wav2vec2 m
 ## Prerequisites

-* PyTorch 1.9.0 and torchaudio 0.9.0 (Optional)
+* PyTorch 1.9 and torchaudio 0.9 (Optional)
 * Python 3.8 or above (Optional)
-* iOS PyTorch Cocoapods library LibTorch 1.9.0
+* iOS Cocoapods LibTorch-Lite 1.9.0
 * Xcode 12.4 or later

 ## Quick Start
@@ -24,14 +24,11 @@ git clone https://github.com/pytorch/ios-demo-app
 cd ios-demo-app/SpeechRecognition
 ```

-If you don't have PyTorch 1.9.0 and torchaudio 0.9.0 installed or want to have a quick try of the demo app, you can download the quantized scripted wav2vec2 model file [here](https://drive.google.com/file/d/1RcCy3K3gDVN2Nun5IIdDbpIDbrKD-XVw/view?usp=sharing), then drag and drop it to the project, and continue to Step 3.
-
-Be aware that the downloadable model file was created with PyTorch 1.9.0 and torchaudio 0.9.0, matching the iOS LibTorch library 1.9.0 specified in the `Podfile`. If you use a different version of PyTorch to create your model by following the instructions below, make sure you specify the same iOS LibTorch version in the `Podfile` to avoid possible errors caused by the version mismatch. Furthermore, if you want to use the latest prototype features in the PyTorch master branch to create the model, follow the steps at [Building PyTorch iOS Libraries from Source](https://pytorch.org/mobile/ios/#build-pytorch-ios-libraries-from-source) on how to use the model in iOS.
+If you don't have PyTorch 1.9 and torchaudio 0.9 installed or want to have a quick try of the demo app, you can download the quantized scripted wav2vec2 model file [here](https://drive.google.com/file/d/1xMh-BZMSIeoohBfZvQFYcemmh5zUn_gh/view?usp=sharing), then drag and drop it to the project, and continue to Step 3.

 ### 2. Prepare the Model

-To install PyTorch 1.9.0, torchaudio 0.9.0 and the Hugging Face transformers, you can do something like this:
+To install PyTorch 1.9, torchaudio 0.9 and the Hugging Face transformers, you can do something like this:

 ```
 conda create -n wav2vec2 python=3.8.5
@@ -40,13 +37,13 @@ pip install torch torchaudio
 pip install transformers
 ```

-Now with PyTorch 1.9.0 and torchaudio 0.9.0 installed, run the following commands on a Terminal:
+Now with PyTorch 1.9 and torchaudio 0.9 installed, run the following commands on a Terminal:

 ```
 python create_wav2vec2.py
 ```

-This will create the model file `wav2vec2.pt` and save it to the `SpeechRecognition` folder.
+This will create the model file `wav2vec2.ptl` and save it to the `SpeechRecognition` folder.

 ### 2. Use LibTorch
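For reference, a hedged sketch of what `create_wav2vec2.py` plausibly does, using torchaudio 0.9's Hugging Face import utility; the repo's actual script may differ in details such as audio-length handling:

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from torchaudio.models.wav2vec2.utils import import_huggingface_model
from transformers import Wav2Vec2ForCTC

# Import the Hugging Face wav2vec2 model into torchaudio's TorchScript-friendly version.
original = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
model = import_huggingface_model(original).eval()

# Quantize the Linear layers, script, optimize, and save for the Lite Interpreter.
quantized = torch.quantization.quantize_dynamic(
    model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8)
scripted = torch.jit.script(quantized)
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("SpeechRecognition/wav2vec2.ptl")
```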
