    -d '{"text": "serving models at scale is really cool!"}'

positive
```

### Monitor your deployment

```bash
$ cortex get sentiment-classifier

status   up-to-date   requested   last update   avg request   2XX
live     1            1           8s            24ms          12

class     count
positive  8
negative  4
```

<br>

### How it works
The CLI sends configuration and code to the cluster every time you run `cortex deploy`. Each model is loaded into a Docker container, along with any Python packages and request handling code. The model is exposed as a web service using a Network Load Balancer (NLB) and FastAPI / TensorFlow Serving / ONNX Runtime (depending on the model type). The containers are orchestrated on Elastic Kubernetes Service (EKS) while logs and metrics are streamed to CloudWatch.
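
To make this concrete, here is a sketch of that loop from the CLI's perspective; the API name reuses the `sentiment-classifier` example above, and the commands should be treated as illustrative rather than a complete reference:

```bash
# send cortex.yaml, your code, and dependencies to the cluster
$ cortex deploy

# list your APIs and their statuses
$ cortex get

# stream an API's logs from CloudWatch
$ cortex logs sentiment-classifier
```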
Cortex manages its own Kubernetes cluster so that end-to-end functionality like request-based autoscaling, GPU support, and spot instance management can work out of the box without any additional DevOps work.

<br>

## What is Cortex similar to?
Cortex is an open source alternative to serving models with SageMaker or building your own model deployment platform on top of AWS services like Elastic Kubernetes Service (EKS), Lambda, or Fargate and open source projects like Docker, Kubernetes, TensorFlow Serving, and TorchServe.

<br>

## Examples
<!-- CORTEX_VERSION_README_MINOR x3 -->
* [Image classification](https://github.com/cortexlabs/cortex/tree/0.15/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
* [Search completion](https://github.com/cortexlabs/cortex/tree/0.15/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
* [Text generation](https://github.com/cortexlabs/cortex/tree/0.15/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
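
Each example directory contains the configuration and code needed to deploy it. A hypothetical session (the branch name and directory layout are assumed from the links above) might look like:

```bash
# fetch the examples at the version matching your CLI
$ git clone -b 0.15 https://github.com/cortexlabs/cortex.git
$ cd cortex/examples/pytorch/text-generator

# deploy using the cortex.yaml in the example directory
$ cortex deploy
```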

<br>

# docs/cluster-management/install.md
_WARNING: you are on the master branch, please refer to the docs on the branch that matches your `cortex version`_
## Running on your machine or a single instance
[Docker](https://docs.docker.com/install) is required to run Cortex locally. In addition, your machine (or your Docker Desktop VM, for Mac users) should have at least 8GB of memory if you plan to deploy large deep learning models.
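
As a quick sanity check (an illustrative snippet, not part of the original instructions), you can verify your Docker installation and the memory available to it:

```bash
# confirm Docker is installed and the daemon is running
$ docker version

# memory available to the Docker daemon, in bytes (aim for at least 8GB)
$ docker info --format '{{.MemTotal}}'
```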
## Running at scale on AWS
[Docker](https://docs.docker.com/install) and valid [AWS credentials](aws-credentials.md) are required to run a Cortex cluster on AWS.
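
One way to supply credentials is through the standard AWS environment variables before running the cluster commands; see [AWS credentials](aws-credentials.md) for the supported options (the values below are placeholders):

```bash
# export credentials for the CLI to use (placeholder values)
$ export AWS_ACCESS_KEY_ID=***
$ export AWS_SECRET_ACCESS_KEY=***
```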
### Spin up a cluster
See [cluster configuration](config.md) to learn how you can customize your cluster with `cluster.yaml` and see [EC2 instances](ec2-instances.md) for an overview of several EC2 instance types.
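
For example, a customized configuration can be passed in when creating the cluster; the `--config` flag here is an assumption based on the cluster configuration docs:

```bash
# cluster.yaml can customize fields such as the instance type and
# min/max instance counts (see config.md for the full schema)
$ cortex cluster up --config cluster.yaml
```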
To use GPU nodes, you may need to subscribe to the [EKS-optimized AMI with GPU Support](https://aws.amazon.com/marketplace/pp/B07GRHFXGM) and [file an AWS support ticket](https://console.aws.amazon.com/support/cases#/create?issueType=service-limit-increase&limitType=ec2-instances) to increase the limit for your desired instance type.
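
With your configuration in place, `cortex cluster up` provisions the infrastructure and prints an estimated cost breakdown:

```bash
# provision infrastructure on AWS and spin up a cluster
$ cortex cluster up

aws resource                                cost per hour
1 eks cluster                               $0.10
0 - 5 g4dn.xlarge instances for your apis   $0.1578 - $0.526 each (varies based on spot price)
0 - 5 50gb ebs volumes for your apis        $0.007 each
1 t3.medium instance for the operator       $0.0416
1 20gb ebs volume for the operator          $0.003
2 network load balancers                    $0.0225 each

your cluster will cost $0.19 - $2.85 per hour based on cluster size and spot instance pricing/availability

○ spinning up your cluster ...

your cluster is ready!
```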