# D2Go/README.md
## Introduction
[Detectron2](https://github.com/facebookresearch/detectron2) is one of the most widely adopted open source projects and implements state-of-the-art object detection, semantic segmentation, panoptic segmentation, and human pose prediction. [D2Go](https://github.com/facebookresearch/d2go) is powered by PyTorch 1.9, torchvision 0.10, and Detectron2 with built-in SOTA networks for mobile - the D2Go model is very small (only 2.15MB) and runs very fast on iOS.

This D2Go iOS demo app shows how to prepare and use the D2Go model on iOS with the newly released LibTorchvision Cocoapods. The code is based on a previous PyTorch iOS [Object Detection demo app](https://github.com/pytorch/ios-demo-app/tree/master/ObjectDetection) that uses a pre-trained YOLOv5 model, with modified pre-processing and post-processing code required by the D2Go model.
## Prerequisites
* PyTorch 1.9 and torchvision 0.10 (Optional)
* Python 3.8 or above (Optional)
* iOS Cocoapods LibTorch-Lite 1.9.0 and LibTorchvision 0.10.0
* Xcode 12.4 or later
## Quick Start
This section shows how to create and use the D2Go model in an iOS app. To just build and run the app without creating the D2Go model yourself, go directly to Step 4.
1. Install PyTorch 1.9 and torchvision 0.10, for example:
```
conda create -n d2go python=3.8.5
conda activate d2go
pip install torch torchvision
```

Run the following command to create the optimized D2Go model `d2go_optimized.pt`:
```
python create_d2go.py
```
Both the optimized JIT model and the Lite Interpreter model will be created and saved in the project folder.
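
For reference, the save step at the end of `create_d2go.py` presumably looks something like the sketch below - here `model` stands for the already-scripted D2Go model built earlier in the script, and the actual code may differ:

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# 'model' is assumed to be the scripted (torch.jit.ScriptModule) D2Go model
optimized = optimize_for_mobile(model)                      # mobile-specific graph optimizations
optimized.save("d2go_optimized.pt")                         # optimized JIT model
optimized._save_for_lite_interpreter("d2go_optimized.ptl")  # Lite Interpreter model
```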
# ImageSegmentation/README.md
## Prerequisites
* PyTorch 1.9 and torchvision 0.10 (Optional)
* Python 3.8 or above (Optional)
* iOS Cocoapods LibTorch-Lite 1.9.0 and LibTorchvision 0.10.0
* Xcode 12.4 or later
## Quick Start
To test run the Image Segmentation iOS App, follow the steps below:
### 1. Prepare the Model
If you don't have the PyTorch environment set up to run the script below to generate the model file, you can download it to the `ios-demo-app/ImageSegmentation` folder using the link [here](https://drive.google.com/file/d/1_guNVutt8eTvO_YhGxkAe1uReBhNaC4f/view?usp=sharing).

Open a Mac Terminal, first install PyTorch 1.9 and torchvision 0.10 using a command like `pip install torch torchvision`, then run the following commands:
```
git clone https://github.com/pytorch/ios-demo-app
cd ios-demo-app/ImageSegmentation
python deeplabv3.py
```
The Python script `deeplabv3.py` is used to generate the Lite Interpreter model file `deeplabv3_scripted.ptl` to be used in iOS.
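
For reference, a minimal sketch of such an export script, assuming the standard torchvision DeepLabV3 entry point - the actual `deeplabv3.py` may use a different model variant or extra wrapping:

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# load a pretrained DeepLabV3 model (the ResNet-50 variant is an assumption)
model = torch.hub.load('pytorch/vision:v0.10.0', 'deeplabv3_resnet50', pretrained=True)
model.eval()

scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter('deeplabv3_scripted.ptl')
```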
### 2. Use LibTorch-Lite
Run the commands below (note the `Podfile` uses `pod 'LibTorch-Lite', '~>1.9.0'`):
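
(The commands themselves are cut off in this diff; assuming the standard CocoaPods workflow and the default workspace name, they would look like:)

```
cd ios-demo-app/ImageSegmentation
pod install
open ImageSegmentation.xcworkspace/
```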
# ObjectDetection/README.md
## Prerequisites
* PyTorch 1.9 and torchvision 0.10 (Optional)
* Python 3.8 (Optional)
* iOS Cocoapods LibTorch-Lite 1.9.0 and LibTorchvision 0.10.0
* Xcode 12 or later
## Quick Start
To test run the Object Detection iOS App, follow the steps below:
### 1. Prepare the Model
If you don't have the PyTorch environment set up to run the script, you can download the model file [here](https://drive.google.com/file/d/1_MF7NVi9Csm1lizoSCp1wCtUUUpuhwet/view?usp=sharing) to the `ios-demo-app/ObjectDetection/ObjectDetection` folder, then skip the rest of this step and go to step 2 directly.

The Python script `export.py` in the `models` folder of the [YOLOv5 repo](https://github.com/ultralytics/yolov5) is used to generate a TorchScript-formatted YOLOv5 model named `yolov5s.torchscript.ptl` for mobile apps.

Open a Mac/Linux/Windows Terminal and run the following commands (note that we use a fork of the original YOLOv5 repo to make sure the code changes work, but feel free to use the original repo):
```
git clone https://github.com/ultralytics/yolov5  # the fork URL is cut off in this diff; the original repo works too
cd yolov5
pip install -r requirements.txt
```
Finally, run the script below to generate the optimized TorchScript Lite Interpreter model and copy the generated model file `yolov5s.torchscript.ptl` to the `ios-demo-app/ObjectDetection/ObjectDetection` folder:
```
python models/export.py
```
Note that the small version of the YOLOv5 model, which runs faster but with less accuracy, is generated by default when running `export.py`. You can also change the value of the `weights` parameter in `export.py` to generate the medium, large, and extra-large versions of the model.
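
For example, assuming the script exposes `weights` as a command-line flag (as YOLOv5's `export.py` does in recent versions), the medium model could be exported with:

```
python models/export.py --weights yolov5m.pt
```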
# QuestionAnswering/README.md
## Prerequisites
* PyTorch 1.9 or later (Optional)
* Python 3.8 (Optional)
* iOS Cocoapods LibTorch-Lite 1.9.0
* Xcode 12 or later
## Quick Start
To test run the iOS QA demo app, run the following commands on a Terminal:
### 1. Prepare the Model
If you don't have PyTorch installed or want a quick try of the demo app, you can download the scripted QA model compressed in a zip file [here](https://drive.google.com/file/d/1PgD3pAEf0riUiT3BfwHOm6UEGk8FfJzI/view?usp=sharing), then unzip it, drag and drop it into the project, and continue to Step 2.

Be aware that the downloadable model file was created with PyTorch 1.9.0, matching the iOS LibTorch library 1.9.0 specified in the `Podfile`. If you use a different version of PyTorch to create your model by following the instructions below, make sure you specify the same iOS LibTorch version in the `Podfile` to avoid possible errors caused by the version mismatch. Furthermore, if you want to use the latest prototype features in the PyTorch master branch to create the model, follow the steps at [Building PyTorch iOS Libraries from Source](https://pytorch.org/mobile/ios/#build-pytorch-ios-libraries-from-source) on how to use the model in iOS.

With PyTorch 1.9.0 installed, first install Huggingface `transformers` 4.6.1 (newer versions may have issues) by running `pip install transformers==4.6.1`, then run `python convert_distilbert_qa.py`.

Note that `convert_distilbert_qa.py` uses a pre-defined question and text whose combined input token size is 360, and 360 is the maximum token size for the user text and question in the app. If the combined token size of the question and text inputs is less than 360, padding is needed to make the model work correctly.
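
To illustrate the fixed 360-token input, here is a sketch (the tokenizer settings are assumptions, not necessarily the exact code in `convert_distilbert_qa.py`):

```
from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased-distilled-squad')
question = "What is PyTorch?"
text = "PyTorch is an open source machine learning framework."
# pad (or truncate) the combined question+text tokens to the fixed 360-token size
inputs = tokenizer(question, text, padding='max_length', truncation=True,
                   max_length=360, return_tensors='pt')
```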
After the script completes, drag and drop the model file `qa360_quantized.ptl` to the iOS app project. [Dynamic quantization](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html) is used to quantize the model, reducing its size by half without changing the question answering results - you can verify this by changing the last 4 lines of code in `convert_distilbert_qa.py` from:
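
(The before-and-after lines are cut off in this diff. For context, a sketch of a typical dynamic-quantization-and-save sequence for this model - not necessarily the script's exact code:)

```
import torch
from transformers import DistilBertForQuestionAnswering, DistilBertTokenizer

# torchscript=True makes the model return plain tuples, which trace cleanly
model = DistilBertForQuestionAnswering.from_pretrained(
    'distilbert-base-uncased-distilled-squad', torchscript=True).eval()
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased-distilled-squad')
inputs = tokenizer("What is PyTorch?", "PyTorch is an open source machine learning framework.",
                   padding='max_length', truncation=True, max_length=360, return_tensors='pt')

# quantize only the nn.Linear weights to int8; activations stay in float
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)
traced = torch.jit.trace(quantized_model, (inputs['input_ids'], inputs['attention_mask']))
traced._save_for_lite_interpreter('qa360_quantized.ptl')
```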
# Seq2SeqNMT/README.md
## Prerequisites
* PyTorch 1.9 or later (Optional)
* Python 3.8 (Optional)
* iOS Cocoapods library LibTorch-Lite 1.9.0
* Xcode 12 or later
## Quick Start
To test run the Seq2Seq NMT iOS demo app, follow the steps below:
### 1. Prepare the Model
If you don't have the PyTorch environment set up to run the script, you can download the PyTorch trained and optimized NMT encoder and decoder models compressed in a zip [here](https://drive.google.com/file/d/1Azj1AI3-clVJ7ub_FUVSm3ja7TIn43Kl/view?usp=sharing), unzip it, copy the models to the iOS app project folder, and continue to Step 2.

If you have a good GPU and want to train your model from scratch, uncomment the line `trainIters(encoder, decoder, 450100, print_every=5000)` in `seq2seq_nmt.py` before running `python seq2seq_nmt.py` to go through the whole process of training, saving, loading, optimizing, and saving the final mobile-ready models. Otherwise, run the script to create `optimized_encoder_150k.ptl` and `optimized_decoder_150k.ptl`, and copy them to the iOS app. Note that dynamic quantization is applied to the decoder in `seq2seq_nmt.py` for its `nn.Linear` parameters to reduce the decoder model size from 29MB to 18MB.
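
A sketch of that quantization-and-export step - `decoder` stands for the trained decoder module, and the script's actual code may differ:

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# dynamically quantize the decoder's nn.Linear parameters (29MB -> 18MB)
quantized_decoder = torch.quantization.quantize_dynamic(
    decoder, {torch.nn.Linear}, dtype=torch.qint8)
ts = torch.jit.script(quantized_decoder)
optimize_for_mobile(ts)._save_for_lite_interpreter('optimized_decoder_150k.ptl')
```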
# SpeechRecognition/README.md
If you don't have PyTorch 1.9 and torchaudio 0.9 installed or want a quick try of the demo app, you can download the quantized scripted wav2vec2 model file [here](https://drive.google.com/file/d/1xMh-BZMSIeoohBfZvQFYcemmh5zUn_gh/view?usp=sharing), then drag and drop it into the project, and continue to Step 3.
### 2. Prepare the Model
To install PyTorch 1.9, torchaudio 0.9 and the Hugging Face transformers, you can do something like this:
```
conda create -n wav2vec2 python=3.8.5
conda activate wav2vec2
pip install torch torchaudio
pip install transformers
```
Now with PyTorch 1.9 and torchaudio 0.9 installed, run the following commands on a Terminal:
```
python create_wav2vec2.py
```
This will create the model file `wav2vec2.ptl` and save it to the `SpeechRecognition` folder.
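
For reference, the core of `create_wav2vec2.py` presumably resembles the sketch below; the checkpoint id, dummy input length, and tracing details are assumptions:

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from transformers import Wav2Vec2ForCTC

# torchscript=True makes the model return plain tuples, which trace cleanly
model = Wav2Vec2ForCTC.from_pretrained('facebook/wav2vec2-base-960h', torchscript=True).eval()

# dynamic quantization shrinks the nn.Linear-heavy transformer layers
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

traced = torch.jit.trace(quantized, torch.randn(1, 16000))  # ~1 second of 16kHz audio
optimize_for_mobile(traced)._save_for_lite_interpreter('wav2vec2.ptl')
```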