`docs-guides/source/convert-tensorflow-2-bert-transformer-models.md`
The following examples demonstrate converting TensorFlow 2 models to Core ML using coremltools.
The following example converts the [DistilBERT model from Huggingface](https://huggingface.co/transformers/model_doc/distilbert.html#tfdistilbertformaskedlm) to Core ML.
```{admonition} Requirements

This example requires TensorFlow 2 and Transformers version 4.17.0.
```
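Assuming a pip-based environment, the requirements above can be installed with the standard PyPI package names:

```shell
pip install "tensorflow>=2.0" "transformers==4.17.0"
```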
Follow these steps:
1. Add the import statements:
   ```python
   import numpy as np
   import coremltools as ct
   import tensorflow as tf

   from transformers import DistilBertTokenizer, TFDistilBertForMaskedLM
   ```

2. Load the DistilBERT model and tokenizer. This example uses the `TFDistilBertForMaskedLM` variant:
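   The code for this step (and for step 3, which defines the `input_shape` and the `'input'` layer that later steps refer to) is not shown here. A minimal sketch, assuming the `distilbert-base-uncased` checkpoint and a fixed sequence length of 10, might look like:

   ```python
   # Sketch only: the checkpoint name, max_seq_length, and input_shape
   # are assumptions. Imports are repeated so the snippet is self-contained.
   import tensorflow as tf
   from transformers import DistilBertTokenizer, TFDistilBertForMaskedLM

   tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
   distilbert_model = TFDistilBertForMaskedLM.from_pretrained("distilbert-base-uncased")

   # (Step 3) Wrap the model with a fixed-shape int32 input layer named 'input'.
   max_seq_length = 10
   input_shape = (1, max_seq_length)  # (batch_size, maximum_sequence_length)
   input_layer = tf.keras.layers.Input(
       shape=input_shape[1:], dtype=tf.int32, name="input"
   )
   prediction_model = distilbert_model(input_layer)
   tf_model = tf.keras.models.Model(inputs=input_layer, outputs=prediction_model)
   ```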
4. Convert the `tf_model` to an ML program (`mlmodel`):

   ```python
   mlmodel = ct.convert(tf_model)
   ```

5. Create the input using `tokenizer`:
   ```python
   # Fill the input with zeros to adhere to input_shape
   input_values = np.zeros(input_shape)
   # Store the tokens from our sample sentence into the input
   input_values[0, :8] = np.array(tokenizer.encode("Hello, my dog is cute")).astype(np.int32)
   ```

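   The slice `[0, :8]` matches the sample sentence's token count: `tokenizer.encode` adds the `[CLS]` and `[SEP]` markers around the six word-piece tokens, producing 8 ids. A quick check (assuming the `distilbert-base-uncased` tokenizer loaded in step 2):

   ```python
   from transformers import DistilBertTokenizer

   tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
   token_ids = tokenizer.encode("Hello, my dog is cute")
   # [CLS] hello , my dog is cute [SEP] -> 8 token ids
   print(len(token_ids))  # → 8
   ```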
6. Use `mlmodel` for prediction:
   ```python
   mlmodel.predict({'input': input_values})  # 'input' is the name of our input layer from (3)
   ```

## Convert the TF Hub BERT Transformer Model
The following example converts the [BERT model from TensorFlow Hub](https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/1).

```{admonition} Requirements

This example requires TensorFlow 2, TensorFlow Hub, and Transformers version 4.17.0.
```
To test the model, double-click the `BERT_with_preview_type.mlpackage` file in the Mac Finder to launch Xcode and open the model information pane, and then follow these steps:
2. Copy and paste sample text, such as the BERTQA model description, into the Passage Context field.
3. Enter a question in the Question field, such as **What is BERT?** The answer appears in the Answer Candidate field, and is also highlighted in the Passage Context field.
