Commit adde985

Update README, install, and tutorial docs (#991)

1 parent 4994345 commit adde985

File tree

6 files changed: +127 additions, -99 deletions


README.md

Lines changed: 57 additions & 51 deletions
@@ -1,12 +1,10 @@
-# Deploy machine learning models in production
-
-Cortex is an open source platform for deploying machine learning models as production web services.
+# Machine learning model serving infrastructure
 
 <br>
 
 <!-- Delete on release branches -->
 <!-- CORTEX_VERSION_README_MINOR -->
-[install](https://cortex.dev/install)[tutorial](https://cortex.dev/iris-classifier)[docs](https://cortex.dev)[examples](https://github.com/cortexlabs/cortex/tree/0.15/examples)[we're hiring](https://angel.co/cortex-labs-inc/jobs)[email us](mailto:hello@cortex.dev)[chat with us](https://gitter.im/cortexlabs/cortex)<br><br>
+[install](https://cortex.dev/install)[docs](https://cortex.dev)[examples](https://github.com/cortexlabs/cortex/tree/0.15/examples)[we're hiring](https://angel.co/cortex-labs-inc/jobs)[chat with us](https://gitter.im/cortexlabs/cortex)<br><br>
 
 <!-- Set header Cache-Control=no-cache on the S3 object metadata (see https://help.github.com/en/articles/about-anonymized-image-urls) -->
 ![Demo](https://d1zqebknpdh033.cloudfront.net/demo/gif/v0.13_2.gif)
@@ -25,43 +23,15 @@ Cortex is an open source platform for deploying machine learning models as produ
 
 <br>
 
-## Spinning up a cluster
+## Deploying a model
 
-Cortex is designed to be self-hosted on any AWS account. You can spin up a cluster with a single command:
+### Install the CLI
 
 <!-- CORTEX_VERSION_README_MINOR -->
 ```bash
-# install the CLI on your machine
 $ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.15/get-cli.sh)"
-
-# provision infrastructure on AWS and spin up a cluster
-$ cortex cluster up
-
-aws region: us-west-2
-aws instance type: g4dn.xlarge
-spot instances: yes
-min instances: 0
-max instances: 5
-
-aws resource cost per hour
-1 eks cluster                                  $0.10
-0 - 5 g4dn.xlarge instances for your apis      $0.1578 - $0.526 each (varies based on spot price)
-0 - 5 50gb ebs volumes for your apis           $0.007 each
-1 t3.medium instance for the operator          $0.0416
-1 20gb ebs volume for the operator             $0.003
-2 network load balancers                       $0.0225 each
-
-your cluster will cost $0.19 - $2.85 per hour based on cluster size and spot instance pricing/availability
-
-○ spinning up your cluster ...
-
-your cluster is ready!
 ```
 
-<br>
-
-## Deploying a model
-
 ### Implement your predictor
 
 ```python
@@ -84,35 +54,75 @@ class PythonPredictor:
 predictor:
   type: python
   path: predictor.py
-tracker:
-  model_type: classification
 compute:
   gpu: 1
   mem: 4G
 ```
 
-### Deploy to AWS
+### Deploy your model
 
 ```bash
 $ cortex deploy
 
 creating sentiment-classifier
 ```
 
-### Serve real-time predictions
+### Serve predictions
+
+```bash
+$ curl http://localhost:8888 \
+    -X POST -H "Content-Type: application/json" \
+    -d '{"text": "serving models locally is cool!"}'
+
+positive
+```
+
+<br>
+
+## Deploying models at scale
+
+### Spin up a cluster
+
+Cortex clusters are designed to be self-hosted on any AWS account (GCP support is coming soon):
+
+```bash
+$ cortex cluster up
+
+aws region: us-west-2
+aws instance type: g4dn.xlarge
+spot instances: yes
+min instances: 0
+max instances: 5
+
+your cluster will cost $0.19 - $2.85 per hour based on cluster size and spot instance pricing/availability
+
+○ spinning up your cluster ...
+
+your cluster is ready!
+```
+
+### Deploy to your cluster with the same code and configuration
+
+```bash
+$ cortex deploy --env aws
+
+creating sentiment-classifier
+```
+
+### Serve predictions at scale
 
 ```bash
 $ curl http://***.amazonaws.com/sentiment-classifier \
     -X POST -H "Content-Type: application/json" \
-    -d '{"text": "the movie was amazing!"}'
+    -d '{"text": "serving models at scale is really cool!"}'
 
 positive
 ```
 
 ### Monitor your deployment
 
 ```bash
-$ cortex get sentiment-classifier --watch
+$ cortex get sentiment-classifier
 
 status   up-to-date   requested   last update   avg request   2XX
 live     1            1           8s            24ms          12
@@ -122,27 +132,23 @@ positive 8
 negative 4
 ```
 
-<br>
+### How it works
 
-## What is Cortex similar to?
+The CLI sends configuration and code to the cluster every time you run `cortex deploy`. Each model is loaded into a Docker container, along with any Python packages and request handling code. The model is exposed as a web service using a Network Load Balancer (NLB) and FastAPI / TensorFlow Serving / ONNX Runtime (depending on the model type). The containers are orchestrated on Elastic Kubernetes Service (EKS) while logs and metrics are streamed to CloudWatch.
 
-Cortex is an open source alternative to serving models with SageMaker or building your own model deployment platform on top of AWS services like Elastic Kubernetes Service (EKS), Elastic Container Service (ECS), Lambda, Fargate, and Elastic Compute Cloud (EC2) and open source projects like Docker, Kubernetes, and TensorFlow Serving.
+Cortex manages its own Kubernetes cluster so that end-to-end functionality like request-based autoscaling, GPU support, and spot instance management can work out of the box without any additional DevOps work.
 
 <br>
 
-## How does Cortex work?
-
-The CLI sends configuration and code to the cluster every time you run `cortex deploy`. Each model is loaded into a Docker container, along with any Python packages and request handling code. The model is exposed as a web service using Elastic Load Balancing (ELB), TensorFlow Serving, and ONNX Runtime. The containers are orchestrated on Elastic Kubernetes Service (EKS) while logs and metrics are streamed to CloudWatch.
+## What is Cortex similar to?
 
-Cortex manages its own Kubernetes cluster so that end-to-end functionality like request-based autoscaling, GPU support, and spot instance management can work out of the box without any additional DevOps work.
+Cortex is an open source alternative to serving models with SageMaker or building your own model deployment platform on top of AWS services like Elastic Kubernetes Service (EKS), Lambda, or Fargate and open source projects like Docker, Kubernetes, TensorFlow Serving, and TorchServe.
 
 <br>
 
-## Examples of Cortex deployments
+## Examples
 
-<!-- CORTEX_VERSION_README_MINOR x5 -->
-* [Sentiment analysis](https://github.com/cortexlabs/cortex/tree/0.15/examples/tensorflow/sentiment-analyzer): deploy a BERT model for sentiment analysis.
+<!-- CORTEX_VERSION_README_MINOR x3 -->
 * [Image classification](https://github.com/cortexlabs/cortex/tree/0.15/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
 * [Search completion](https://github.com/cortexlabs/cortex/tree/0.15/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
 * [Text generation](https://github.com/cortexlabs/cortex/tree/0.15/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
-* [Iris classification](https://github.com/cortexlabs/cortex/tree/0.15/examples/sklearn/iris-classifier): deploy a scikit-learn model to classify iris flowers.
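The README's "Implement your predictor" step references a `predictor.py` implementing Cortex's Python predictor interface (`__init__` receives the API's configuration, `predict` receives the parsed request body). A minimal sketch, in which the sentiment labels and keyword check are illustrative stand-ins for a real model:

```python
# predictor.py -- minimal sketch of the Python predictor interface.
# The labels and the keyword check are illustrative stand-ins for a real model.
class PythonPredictor:
    def __init__(self, config):
        # config carries values from the predictor section of cortex.yaml;
        # a real predictor would load its model weights here, once per container
        self.labels = ["negative", "positive"]

    def predict(self, payload):
        # payload is the parsed JSON request body, e.g. {"text": "..."}
        text = payload["text"]
        return self.labels[1] if "cool" in text else self.labels[0]
```

Cortex instantiates the class once when the container starts and calls `predict` for every request, which is why model loading belongs in `__init__`.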

cli/cmd/root.go

Lines changed: 1 addition & 1 deletion

@@ -121,7 +121,7 @@ func initTelemetry() {
 var _rootCmd = &cobra.Command{
 	Use:     "cortex",
 	Aliases: []string{"cx"},
-	Short:   "deploy machine learning models in production",
+	Short:   "machine learning model serving infrastructure",
 }
 
 func Execute() {

docs/cluster-management/install.md

Lines changed: 19 additions & 27 deletions

@@ -2,41 +2,37 @@
 
 _WARNING: you are on the master branch, please refer to the docs on the branch that matches your `cortex version`_
 
-## Prerequisites
+## Running on your machine or a single instance
 
-1. [Docker](https://docs.docker.com/install)
-2. [AWS credentials](aws-credentials.md)
+[Docker](https://docs.docker.com/install) is required to run Cortex locally. In addition, your machine (or your Docker Desktop for Mac users) should have at least 8GB of memory if you plan to deploy large deep learning models.
 
-## Spin up a cluster
-
-See [cluster configuration](config.md) to learn how you can customize your cluster with `cluster.yaml` and see [EC2 instances](ec2-instances.md) for an overview of several EC2 instance types. To use GPU nodes, you may need to subscribe to the [EKS-optimized AMI with GPU Support](https://aws.amazon.com/marketplace/pp/B07GRHFXGM) and [file an AWS support ticket](https://console.aws.amazon.com/support/cases#/create?issueType=service-limit-increase&limitType=ec2-instances) to increase the limit for your desired instance type.
+### Install the CLI
 
 <!-- CORTEX_VERSION_MINOR -->
 ```bash
-# install the CLI on your machine
 $ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+```
 
-# provision infrastructure on AWS and spin up a cluster
-$ cortex cluster up
-
-aws resource cost per hour
-1 eks cluster                                  $0.10
-0 - 5 g4dn.xlarge instances for your apis      $0.1578 - $0.526 each (varies based on spot price)
-0 - 5 50gb ebs volumes for your apis           $0.007 each
-1 t3.medium instance for the operator          $0.0416
-1 20gb ebs volume for the operator             $0.003
-2 network load balancers                       $0.0225 each
+## Running at scale on AWS
 
-your cluster will cost $0.19 - $2.85 per hour based on cluster size and spot instance pricing/availability
+[Docker](https://docs.docker.com/install) and valid [AWS credentials](aws-credentials.md) are required to run a Cortex cluster on AWS.
 
-○ spinning up your cluster ...
+### Spin up a cluster
 
-your cluster is ready!
-```
+See [cluster configuration](config.md) to learn how you can customize your cluster with `cluster.yaml` and see [EC2 instances](ec2-instances.md) for an overview of several EC2 instance types.
 
-## Deploy a model
+To use GPU nodes, you may need to subscribe to the [EKS-optimized AMI with GPU Support](https://aws.amazon.com/marketplace/pp/B07GRHFXGM) and [file an AWS support ticket](https://console.aws.amazon.com/support/cases#/create?issueType=service-limit-increase&limitType=ec2-instances) to increase the limit for your desired instance type.
 
 <!-- CORTEX_VERSION_MINOR -->
+```bash
+# install the CLI on your machine
+$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+
+# provision infrastructure on AWS and spin up a cluster
+$ cortex cluster up
+```
+
+## Deploy an example
 
 ```bash
 # clone the Cortex repository
@@ -45,7 +41,7 @@ git clone -b master https://github.com/cortexlabs/cortex.git
 # navigate to the TensorFlow iris classification example
 cd cortex/examples/tensorflow/iris-classifier
 
-# deploy the model to the cluster
+# deploy the model
 cortex deploy
 
 # view the status of the api
@@ -61,11 +57,7 @@ cortex get iris-classifier
 curl -X POST -H "Content-Type: application/json" \
   -d '{ "sepal_length": 5.2, "sepal_width": 3.6, "petal_length": 1.4, "petal_width": 0.3 }' \
   <API endpoint>
-```
 
-## Cleanup
-
-```bash
 # delete the api
 cortex delete iris-classifier
 ```
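The tutorial's `curl` request above can also be issued from Python. A sketch using only the standard library; the endpoint argument is a placeholder for whatever URL `cortex get iris-classifier` prints:

```python
import json
from urllib import request

# Same payload as the curl example in the install docs.
IRIS = {"sepal_length": 5.2, "sepal_width": 3.6,
        "petal_length": 1.4, "petal_width": 0.3}

def build_predict_request(endpoint: str) -> request.Request:
    # Build the POST request that the curl example issues.
    return request.Request(
        endpoint,
        data=json.dumps(IRIS).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it requires a live API endpoint:
# with request.urlopen(build_predict_request("<API endpoint>")) as resp:
#     print(resp.read().decode())
```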

docs/summary.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 # Table of contents
 
-* [Deploy machine learning models in production](../README.md)
+* [Machine learning model serving infrastructure](../README.md)
 * [Install](cluster-management/install.md)
 * [Tutorial](../examples/sklearn/iris-classifier/README.md)
 * [GitHub](https://github.com/cortexlabs/cortex)
Lines changed: 1 addition & 1 deletion

@@ -1,3 +1,3 @@
 {
-  "text": "deploy machine learning models in production"
+  "text": "machine learning model serving infrastructure"
 }

0 commit comments
