@@ -147,6 +140,6 @@ Cortex manages its own Kubernetes cluster so that end-to-end functionality like
 ## Examples
 
 <!-- CORTEX_VERSION_README_MINOR x3 -->
-* [Image classification](https://github.com/cortexlabs/cortex/tree/0.17/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
-* [Search completion](https://github.com/cortexlabs/cortex/tree/0.17/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
-* [Text generation](https://github.com/cortexlabs/cortex/tree/0.17/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
+* [Image classification](https://github.com/cortexlabs/cortex/tree/0.18/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
+* [Search completion](https://github.com/cortexlabs/cortex/tree/0.18/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
+* [Text generation](https://github.com/cortexlabs/cortex/tree/0.18/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
docs/cluster-management/config.md: 21 additions & 21 deletions
@@ -47,7 +47,7 @@ instance_volume_type: gp2
 
 # whether the subnets used for EC2 instances should be public or private (default: "public")
 # if "public", instances will be assigned public IP addresses; if "private", instances won't have public IPs and a NAT gateway will be created to allow outgoing network requests
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-cluster for more information
+# see https://docs.cortex.dev/v/0.18/miscellaneous/security#private-cluster for more information
 subnet_visibility: public # must be "public" or "private"
 
 # whether to include a NAT gateway with the cluster (a NAT gateway is necessary when using private subnets)

@@ -56,12 +56,12 @@ nat_gateway: none # must be "none", "single", or "highly_available" (highly_ava
 
 # whether the API load balancer should be internet-facing or internal (default: "internet-facing")
 # note: if using "internal", APIs will still be accessible via the public API Gateway endpoint unless you also disable API Gateway in your API's configuration (if you do that, you must configure VPC Peering to connect to your APIs)
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-cluster for more information
+# see https://docs.cortex.dev/v/0.18/miscellaneous/security#private-cluster for more information
 api_load_balancer_scheme: internet-facing # must be "internet-facing" or "internal"
 
 # whether the operator load balancer should be internet-facing or internal (default: "internet-facing")
-# note: if using "internal", you must configure VPC Peering to connect your CLI to your cluster operator (https://docs.cortex.dev/v/master/guides/vpc-peering)
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-cluster for more information
+# note: if using "internal", you must configure VPC Peering to connect your CLI to your cluster operator (https://docs.cortex.dev/v/0.18/guides/vpc-peering)
+# see https://docs.cortex.dev/v/0.18/miscellaneous/security#private-cluster for more information
 operator_load_balancer_scheme: internet-facing # must be "internet-facing" or "internal"
 
 # CloudWatch log group for cortex (default: <cluster_name>)

@@ -71,10 +71,10 @@ log_group: cortex
 tags: # <string>: <string> map of key/value pairs
 
 # whether to use spot instances in the cluster (default: false)
-# see https://docs.cortex.dev/v/master/cluster-management/spot-instances for additional details on spot configuration
+# see https://docs.cortex.dev/v/0.18/cluster-management/spot-instances for additional details on spot configuration
 spot: false
 
-# see https://docs.cortex.dev/v/master/guides/custom-domain for instructions on how to set up a custom domain
+# see https://docs.cortex.dev/v/0.18/guides/custom-domain for instructions on how to set up a custom domain
 ssl_certificate_arn:
 ```

@@ -85,19 +85,19 @@ The docker images used by the Cortex cluster can also be overridden, although th
docs/deployments/exporting.md: 7 additions & 7 deletions
@@ -11,7 +11,7 @@ Here are examples for some common ML libraries:
 The recommended approach is to export your PyTorch model with [torch.save()](https://pytorch.org/docs/stable/torch.html?highlight=save#torch.save). Here is PyTorch's documentation on [saving and loading models](https://pytorch.org/tutorials/beginner/saving_loading_models.html).
 
 <!-- CORTEX_VERSION_MINOR -->
-[examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/pytorch/iris-classifier) exports its trained model like this:
+[examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.18/examples/pytorch/iris-classifier) exports its trained model like this:
 
 ```python
 torch.save(model.state_dict(), "weights.pth")
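Note (not part of this diff): the saved `state_dict` is loaded back by re-instantiating the architecture first. A minimal sketch, assuming a hypothetical `IrisNet` module that stands in for whatever class was saved to `weights.pth`:

```python
import torch
import torch.nn as nn

# hypothetical stand-in for the iris-classifier architecture; the class used at
# load time must match the one whose state_dict was saved to weights.pth
class IrisNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))

    def forward(self, x):
        return self.net(x)

model = IrisNet()
model.load_state_dict(torch.load("weights.pth"))
model.eval()  # switch to inference mode before serving predictions
```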
@@ -24,7 +24,7 @@ For Inferentia-equipped instances, check the [Inferentia instructions](inferenti
 It may also be possible to export your PyTorch model into the ONNX format using [torch.onnx.export()](https://pytorch.org/docs/stable/onnx.html#torch.onnx.export).
 
 <!-- CORTEX_VERSION_MINOR -->
-For example, if [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/pytorch/iris-classifier) were to export the model to ONNX, it would look like this:
+For example, if [examples/pytorch/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.18/examples/pytorch/iris-classifier) were to export the model to ONNX, it would look like this:
 
 ```python
 placeholder = torch.randn(1, 4)
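The hunk ends mid-snippet at the placeholder line. A hedged sketch of how such an export could continue, assuming `model` is the trained module from the previous snippet (the actual example may differ):

```python
import torch

# trace the model with a dummy input of the expected shape (4 iris features)
placeholder = torch.randn(1, 4)
torch.onnx.export(
    model,               # assumed: the trained torch.nn.Module
    placeholder,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
)
```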
@@ -52,7 +52,7 @@ A TensorFlow `SavedModel` directory should have this structure:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Most of the TensorFlow examples use this approach. Here is the relevant code from [examples/tensorflow/sentiment-analyzer](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/sentiment-analyzer):
+Most of the TensorFlow examples use this approach. Here is the relevant code from [examples/tensorflow/sentiment-analyzer](https://github.com/cortexlabs/cortex/blob/0.18/examples/tensorflow/sentiment-analyzer):
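The referenced snippet is not captured in this hunk. For orientation only, a generic sketch of writing a Keras model out in the `SavedModel` layout with `tf.saved_model.save`; the toy model below is an assumption, not the sentiment-analyzer code:

```python
import tensorflow as tf

# toy Keras model standing in for whatever the example actually trains
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# write the SavedModel into a versioned subdirectory (e.g. export/1/),
# producing the directory structure described above
tf.saved_model.save(model, "export/1/")
```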
-[examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/iris-classifier) also uses the `SavedModel` approach, and includes a Python notebook demonstrating how it was exported.
+[examples/tensorflow/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.18/examples/tensorflow/iris-classifier) also uses the `SavedModel` approach, and includes a Python notebook demonstrating how it was exported.
 
 ### Other model formats
 
 There are other ways to export Keras or TensorFlow models, and as long as they can be loaded and used to make predictions in Python, they will be supported by Cortex.
 
 <!-- CORTEX_VERSION_MINOR -->
-For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/license-plate-reader) uses this approach.
+For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https://github.com/cortexlabs/cortex/blob/0.18/examples/tensorflow/license-plate-reader) uses this approach.
 
 ## Scikit-learn
@@ -106,7 +106,7 @@ For example, the `crnn` API in [examples/tensorflow/license-plate-reader](https:
 Scikit-learn models are typically exported using `pickle`. Here is [Scikit-learn's documentation](https://scikit-learn.org/stable/modules/model_persistence.html).
 
 <!-- CORTEX_VERSION_MINOR -->
-[examples/sklearn/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/sklearn/iris-classifier) uses this approach. Here is the relevant code:
+[examples/sklearn/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.18/examples/sklearn/iris-classifier) uses this approach. Here is the relevant code:
 
 ```python
 pickle.dump(model, open("model.pkl", "wb"))
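The inverse step is a single `pickle.load`; a minimal sketch (not from the diff), assuming the same scikit-learn version is installed at load time as at export time:

```python
import pickle

# load the pickled scikit-learn model back for inference
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

# hypothetical iris measurements: sepal/petal length and width
print(model.predict([[5.1, 3.5, 1.4, 0.2]]))
```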
@@ -159,7 +159,7 @@ model.save_model("model.bin")
 It is also possible to export an XGBoost model to the ONNX format using [onnxmltools](https://github.com/onnx/onnxmltools).
 
 <!-- CORTEX_VERSION_MINOR -->
-[examples/onnx/iris-classifier](https://github.com/cortexlabs/cortex/blob/master/examples/onnx/iris-classifier) uses this approach. Here is the relevant code:
+[examples/onnx/iris-classifier](https://github.com/cortexlabs/cortex/blob/0.18/examples/onnx/iris-classifier) uses this approach. Here is the relevant code:
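The referenced snippet is cut off here. A hedged sketch of what an XGBoost-to-ONNX conversion with onnxmltools can look like; the toy training data, tensor name, and shape are assumptions rather than a copy of the example:

```python
import onnxmltools
from onnxmltools.convert.common.data_types import FloatTensorType
import xgboost as xgb

# train a tiny XGBoost classifier as a stand-in for the example's model
model = xgb.XGBClassifier()
model.fit([[5.1, 3.5, 1.4, 0.2], [6.2, 3.4, 5.4, 2.3]], [0, 1])

# convert to ONNX, declaring the input as a float tensor with 4 features
onnx_model = onnxmltools.convert_xgboost(
    model, initial_types=[("input", FloatTensorType([1, 4]))]
)

# serialize the ONNX protobuf to disk so an ONNX runtime can load it
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```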