
Commit 8aa5d52

tonybove-apple and above3 authored
Docs-Guides - Update Examples in Batch 1 (#2080)
* Docs-Guides - Update Examples in Batch 1

* Edits For Docs-Guides - Update Examples in Batch 1

Co-authored-by: above3 <anthony_bove@apple.com>
1 parent f1b684f commit 8aa5d52

17 files changed: +255 −230 lines changed

docs-guides/source/convert-tensorflow-2-bert-transformer-models.md

Lines changed: 92 additions & 70 deletions
The following examples demonstrate converting TensorFlow 2 models to Core ML using coremltools.

The following example converts the [DistilBERT model from Huggingface](https://huggingface.co/transformers/model_doc/distilbert.html#tfdistilbertformaskedlm) to Core ML.

```{admonition} Requirements

This example requires TensorFlow 2 and Transformers version 4.17.0.
```

Follow these steps:

1. Add the import statements:

   ```python
   import numpy as np
   import coremltools as ct
   import tensorflow as tf

   from transformers import DistilBertTokenizer, TFDistilBertForMaskedLM
   ```

2. Load the DistilBERT model and tokenizer. This example uses the `TFDistilBertForMaskedLM` variant:

   ```python
   tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-cased')
   distilbert_model = TFDistilBertForMaskedLM.from_pretrained('distilbert-base-cased')
   ```

3. Describe and set the input layer, and then build the TensorFlow model (`tf_model`):

   ```python
   max_seq_length = 10
   input_shape = (1, max_seq_length)  # (batch_size, maximum_sequence_length)

   input_layer = tf.keras.layers.Input(shape=input_shape[1:], dtype=tf.int32, name='input')

   prediction_model = distilbert_model(input_layer)
   tf_model = tf.keras.models.Model(inputs=input_layer, outputs=prediction_model)
   ```

4. Convert the `tf_model` to an ML program (`mlmodel`):

   ```python
   mlmodel = ct.convert(tf_model)
   ```

5. Create the input using `tokenizer`:

   ```python
   # Fill the input with zeros to adhere to input_shape
   input_values = np.zeros(input_shape)
   # Store the tokens from the sample sentence in the input
   input_values[0, :8] = np.array(tokenizer.encode("Hello, my dog is cute")).astype(np.int32)
   ```

6. Use `mlmodel` for prediction:

   ```python
   mlmodel.predict({'input': input_values})  # 'input' is the name of the input layer from step 3
   ```
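The call in step 6 returns a dictionary that maps output names to NumPy arrays. As a rough sanity check, you can decode the highest-scoring token at each position. The following is a minimal sketch, not part of the original example; it assumes the converted model has a single output holding the masked-LM logits with shape `(1, max_seq_length, vocab_size)` (print `mlmodel` to confirm the output name and shape):

```python
outputs = mlmodel.predict({'input': input_values})
logits = list(outputs.values())[0]             # assumed shape: (1, max_seq_length, vocab_size)
predicted_ids = np.argmax(logits, axis=-1)[0]  # highest-scoring token ID at each position
# Decode only the 8 positions that step 5 filled with real tokens
print(tokenizer.decode([int(i) for i in predicted_ids[:8]]))
```

Because no positions were masked, the decoded string should roughly echo the input sentence.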

## Convert the TF Hub BERT Transformer Model

The following example converts the [BERT model from TensorFlow Hub](https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/1).

```{admonition} Requirements

This example requires TensorFlow 2, TensorFlow Hub, and Transformers version 4.17.0.
```

Follow these steps:

1. Add the import statements:

   ```python
   import numpy as np
   import tensorflow as tf
   import tensorflow_hub as tf_hub

   import coremltools as ct
   ```

2. Describe and set the input layer:

   ```python
   max_seq_length = 384
   input_shape = (1, max_seq_length)

   input_words = tf.keras.layers.Input(
       shape=input_shape[1:], dtype=tf.int32, name='input_words')
   input_masks = tf.keras.layers.Input(
       shape=input_shape[1:], dtype=tf.int32, name='input_masks')
   segment_ids = tf.keras.layers.Input(
       shape=input_shape[1:], dtype=tf.int32, name='segment_ids')
   ```

3. Build the TensorFlow model (`tf_model`):

   ```python
   bert_layer = tf_hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/1", trainable=False)

   pooled_output, sequence_output = bert_layer(
       [input_words, input_masks, segment_ids])

   tf_model = tf.keras.models.Model(
       inputs=[input_words, input_masks, segment_ids],
       outputs=[pooled_output, sequence_output])
   ```

4. Convert the `tf_model` to an ML program:

   ```python
   mlmodel = ct.convert(tf_model, source='TensorFlow')
   ```

5. Define the `model.preview.type` metadata as `"bertQA"` so that you can preview the model in Xcode, and then save the model as an `mlpackage` file:

   ```python
   mlmodel.user_defined_metadata["com.apple.coreml.model.preview.type"] = "bertQA"
   mlmodel.save("BERT_with_preview_type.mlpackage")
   ```
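Before opening the package in Xcode, you can optionally run the converted model from Python as a sanity check. This is a minimal sketch, not part of the original example; the zero-filled arrays are placeholders for real BERT token IDs, attention-mask values, and segment IDs, and you can print `mlmodel` to confirm the input and output names:

```python
# Placeholder inputs shaped like the Keras input layers from step 2
words = np.zeros(input_shape, dtype=np.int32)
masks = np.zeros(input_shape, dtype=np.int32)
segments = np.zeros(input_shape, dtype=np.int32)

outputs = mlmodel.predict({
    'input_words': words,
    'input_masks': masks,
    'segment_ids': segments,
})
print({name: value.shape for name, value in outputs.items()})
```

A real question-answering app would fill these arrays by tokenizing a question and passage pair with a BERT wordpiece tokenizer, which the Xcode preview handles for you.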
To test the model, double-click the `BERT_with_preview_type.mlpackage` file in the Mac Finder to launch Xcode and open the model information pane, and then follow these steps:

1. Click the **Preview** tab.
2. Copy and paste sample text, such as the BERT QA model description, into the Passage Context field.
3. Enter a question in the Question field, such as **What is BERT?** The answer appears in the Answer Candidate field and is also highlighted in the Passage Context field.

![Preview in Xcode](images/xcode_bert_model_preview3_preview.png)

docs-guides/source/coremltools-examples.md

Lines changed: 0 additions & 1 deletion
Full examples:

- [Segmentation Example](xcode-model-preview-types.md#segmentation-example)
- [BERT QA Example](xcode-model-preview-types.md#bert-qa-example) (removed by this commit)
- [Body Pose Example](xcode-model-preview-types.md#body-pose-example)
### MLModel Utilities
Eight image files changed (binary previews not shown): −146 KB, −181 KB, −701 KB, 332 KB, −150 KB, −20.1 KB, −319 KB, −65.4 KB.
