-You can now run the same commands shown above to deploy the iris classifier to AWS (if you didn't set the default CLI environment, add `--env aws` to the `cortex` commands).
+You can now run the same commands shown above to deploy the text generator to AWS (if you didn't set the default CLI environment, add `--env aws` to the `cortex` commands).
 
 ## Next steps
 
 <!-- CORTEX_VERSION_MINOR -->
-* Try the [tutorial](../../examples/sklearn/iris-classifier/README.md) to learn more about how to use Cortex.
+* Try the [tutorial](../../examples/pytorch/text-generator/README.md) to learn more about how to use Cortex.
 * Deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/master/examples).
 * See our [exporting guide](../guides/exporting.md) for how to export your model to use in an API.
 * See [uninstall](uninstall.md) if you'd like to spin down your cluster.
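As a concrete illustration of the deploy flow referenced in the hunk above (a sketch only; it assumes the tutorial's project directory and the standard `cortex deploy` and `cortex get` commands):

```bash
# Run from the project directory that contains cortex.yaml.
cortex deploy --env aws   # deploy the API to the aws environment
cortex get --env aws      # check the status of deployed APIs
```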
docs/deployments/batch-api/predictors.md (3 additions, 3 deletions)
@@ -22,7 +22,7 @@ The following files can also be added at the root of the project's directory:
 For example, if your directory looks like this:
 
 ```text
-./iris-classifier/
+./my-classifier/
 ├── cortex.yaml
 ├── values.json
 ├── predictor.py
@@ -191,7 +191,7 @@ class TensorFlowPredictor:
 <!-- CORTEX_VERSION_MINOR -->
 Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
-When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "iris-classifier")`). See the [multi model guide](../../guides/multi-model.md#tensorflow-predictor) for more information.
+When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). See the [multi model guide](../../guides/multi-model.md#tensorflow-predictor) for more information.
 
 For proper separation of concerns, it is recommended to use the constructor's `config` parameter for information such as from where to download the model and initialization files, or any configurable model parameters. You define `config` in your [API configuration](api-configuration.md), and it is passed through to your Predictor's constructor. The `config` parameters in the `API configuration` can be overridden by providing `config` in the job submission requests.
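To make the pattern in this hunk concrete, here is a minimal sketch of a TensorFlow Predictor. It is illustrative only: the constructor signature is simplified (the full signature, including any job-related parameters, is defined earlier in predictors.md), and the `"text-generator"` name assumes a matching entry in the `models` field.

```python
# Minimal sketch, not from the docs: a Predictor that saves the client
# and routes inference to a named model.
class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        self.client = tensorflow_client  # connection to TensorFlow Serving
        self.config = config             # values from the API configuration

    def predict(self, payload):
        # Preprocess the JSON payload here if needed.
        prediction = self.client.predict(payload, "text-generator")
        # Postprocess the prediction here if needed.
        return prediction
```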
@@ -260,7 +260,7 @@ class ONNXPredictor:
 <!-- CORTEX_VERSION_MINOR -->
 Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
-When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "iris-classifier")`). See the [multi model guide](../../guides/multi-model.md#onnx-predictor) for more information.
+When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`). See the [multi model guide](../../guides/multi-model.md#onnx-predictor) for more information.
 
 For proper separation of concerns, it is recommended to use the constructor's `config` parameter for information such as from where to download the model and initialization files, or any configurable model parameters. You define `config` in your [API configuration](api-configuration.md), and it is passed through to your Predictor's constructor. The `config` parameters in the `API configuration` can be overridden by providing `config` in the job submission requests.
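A corresponding sketch for the ONNX case, this time with a hypothetical preprocessing step (the `"input"` payload key and the float32 conversion are illustrative assumptions, not from the docs, and the constructor signature is simplified as above):

```python
import numpy as np

# Minimal sketch, not from the docs: preprocessing a JSON payload into the
# array an exported ONNX model expects, then running inference by model name.
class ONNXPredictor:
    def __init__(self, onnx_client, config):
        self.client = onnx_client  # manages an ONNX Runtime session

    def predict(self, payload):
        # Hypothetical preprocessing: the payload's "input" field becomes
        # a float32 array for the ONNX Runtime session.
        model_input = np.asarray(payload["input"], dtype=np.float32)
        return self.client.predict(model_input, "text-generator")
```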
docs/deployments/python-packages.md (3 additions, 3 deletions)
@@ -7,7 +7,7 @@ _WARNING: you are on the master branch, please refer to the docs on the branch that matches your `cortex version`_
 You can install your required PyPI packages and import them in your Python files using pip. Cortex looks for a `requirements.txt` file in the top level Cortex project directory (i.e. the directory which contains `cortex.yaml`):
 
 ```text
-./iris-classifier/
+./my-classifier/
 ├── cortex.yaml
 ├── predictor.py
 ├── ...
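For instance, a `requirements.txt` in that directory could pin the project's PyPI dependencies (the packages and versions below are placeholders for illustration):

```text
torch==1.7.1
transformers==3.5.1
numpy==1.19.4
```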
@@ -56,7 +56,7 @@ On GitHub, you can generate a personal access token by following [these steps](h
 Python packages can also be installed by providing a `setup.py` that describes your project's modules. Here's an example directory structure:
 
 ```text
-./iris-classifier/
+./my-classifier/
 ├── cortex.yaml
 ├── predictor.py
 ├── ...
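A minimal `setup.py` for a layout like this might be sketched as follows (the package name and module discovery are hypothetical; adapt them to the project's actual modules):

```python
# setup.py: hypothetical minimal packaging script for the project's modules.
from setuptools import find_packages, setup

setup(
    name="my-classifier",      # placeholder name matching the directory above
    version="0.1.0",
    packages=find_packages(),  # discover the project's Python packages
)
```

With a package defined this way, pip can install it from `requirements.txt` (pip accepts a plain `.` entry for the local project), which appears to be what the next hunk's context line about the form of `requirements.txt` refers to.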
@@ -78,7 +78,7 @@ In this case, `requirements.txt` will have this form:
 Cortex supports installing Conda packages. We recommend only using Conda when your required packages are not available in PyPI. Cortex looks for a `conda-packages.txt` file in the top level Cortex project directory (i.e. the directory which contains `cortex.yaml`):
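As an illustration of what such a file might hold (an assumption based on standard Conda package specs, with one spec per line; the channel, package, and version are placeholders):

```text
conda-forge::rdkit=2020.09.1
```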
@@ -257,7 +234,7 @@
 Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
-When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "iris-classifier")`). See the [multi model guide](../../guides/multi-model.md#tensorflow-predictor) for more information.
+When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). See the [multi model guide](../../guides/multi-model.md#tensorflow-predictor) for more information.
 
 For proper separation of concerns, it is recommended to use the constructor's `config` parameter for information such as configurable model parameters or download links for initialization files. You define `config` in your [API configuration](api-configuration.md), and it is passed through to your Predictor's constructor.
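The `config` pattern described above can be sketched like this (the `model_name` and `max_length` keys are hypothetical examples of values one might define in the API configuration's `config` section):

```python
# Hypothetical sketch: reading API-configuration values in the constructor
# instead of hard-coding them in the Predictor.
class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        self.client = tensorflow_client
        self.model_name = config["model_name"]          # e.g. "text-generator"
        self.max_length = config.get("max_length", 50)  # optional, with default

    def predict(self, payload):
        return self.client.predict(payload, self.model_name)
```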
@@ -352,7 +329,7 @@ class ONNXPredictor:
 <!-- CORTEX_VERSION_MINOR -->
 Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
-When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "iris-classifier")`). See the [multi model guide](../../guides/multi-model.md#onnx-predictor) for more information.
+When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`). See the [multi model guide](../../guides/multi-model.md#onnx-predictor) for more information.
 
 For proper separation of concerns, it is recommended to use the constructor's `config` parameter for information such as configurable model parameters or download links for initialization files. You define `config` in your [API configuration](api-configuration.md), and it is passed through to your Predictor's constructor.
docs/deployments/realtime-api/traffic-splitter.md (1 addition, 1 deletion)
@@ -78,5 +78,5 @@ Note that this will not delete the Realtime APIs targeted by the Traffic Splitter
 
 <!-- CORTEX_VERSION_MINOR -->
 * [Traffic Splitter Tutorial](../../../examples/traffic-splitter/README.md) provides a step-by-step walkthrough for deploying a Traffic Splitter
-* [Realtime API Tutorial](../../../examples/sklearn/iris-classifier/README.md) provides a step-by-step walkthrough of deploying an iris classifier Realtime API
+* [Realtime API Tutorial](../../../examples/pytorch/text-generator/README.md) provides a step-by-step walkthrough of deploying a realtime API for text generation
 * [CLI documentation](../../miscellaneous/cli.md) lists all CLI commands