1 | 1 | # Install |
2 | 2 |
3 | | -## Running on your machine or a single instance |
4 | | - |
5 | | -[Docker](https://docs.docker.com/install) is required to run Cortex locally. In addition, your machine (or your Docker Desktop for Mac users) should have at least 8GB of memory if you plan to deploy large deep learning models. |
6 | | - |
7 | | -### Install the CLI |
| 3 | +## Install the CLI |
8 | 4 |
9 | 5 | <!-- CORTEX_VERSION_MINOR --> |
10 | 6 | ```bash |
11 | 7 | bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.18/get-cli.sh)" |
12 | 8 | ``` |
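A quick way to confirm the install worked, assuming the script placed the `cortex` binary on your `PATH`:

```bash
# print the installed CLI version
cortex version
```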
13 | 9 |
14 | | -Continue to [deploy an example](#deploy-an-example) below. |
15 | | - |
16 | | -## Running at scale on AWS |
17 | | - |
18 | | -[Docker](https://docs.docker.com/install) and valid [AWS credentials](aws-credentials.md) are required to run a Cortex cluster on AWS. |
19 | | - |
20 | | -### Spin up a cluster |
21 | | - |
22 | | -See [cluster configuration](config.md) to learn how you can customize your cluster with `cluster.yaml` and see [EC2 instances](ec2-instances.md) for an overview of several EC2 instance types. |
23 | | - |
24 | | -To use GPU nodes, you may need to subscribe to the [EKS-optimized AMI with GPU Support](https://aws.amazon.com/marketplace/pp/B07GRHFXGM) and [file an AWS support ticket](https://console.aws.amazon.com/support/cases#/create?issueType=service-limit-increase&limitType=ec2-instances) to increase the limit for your desired instance type. |
25 | | - |
26 | | -<!-- CORTEX_VERSION_MINOR --> |
27 | | -```bash |
28 | | -# install the CLI on your machine |
29 | | -bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.18/get-cli.sh)" |
30 | | - |
31 | | -# provision infrastructure on AWS and spin up a cluster |
32 | | -cortex cluster up |
33 | | -``` |
| 10 | +You must have [Docker](https://docs.docker.com/install) installed to run Cortex locally or to create a cluster on AWS. |
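A quick sanity check that Docker is ready before you continue (just a sketch; any Docker install method is fine):

```bash
# confirm the Docker CLI is installed and the daemon is reachable
docker --version
docker info > /dev/null && echo "Docker daemon is running"
```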
34 | 11 |
35 | 12 | ## Deploy an example |
36 | 13 |
@@ -63,4 +40,26 @@ curl -X POST -H "Content-Type: application/json" \ |
63 | 40 | cortex delete iris-classifier |
64 | 41 | ``` |
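If you want to confirm the API is gone after deleting it, listing your APIs is a simple check (assuming the default local environment):

```bash
# list deployed APIs; iris-classifier should no longer appear
cortex get
```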
65 | 42 |
66 | | -See [uninstall](uninstall.md) if you'd like to spin down your cluster. |
| 43 | +## Running at scale on AWS |
| 44 | + |
| 45 | +Run the command below to create a cluster with basic configuration, or see [cluster configuration](config.md) to learn how you can customize your cluster with `cluster.yaml`. |
| 46 | + |
| 47 | +See [EC2 instances](ec2-instances.md) for an overview of several EC2 instance types. To use GPU nodes, you may need to subscribe to the [EKS-optimized AMI with GPU Support](https://aws.amazon.com/marketplace/pp/B07GRHFXGM) and [file an AWS support ticket](https://console.aws.amazon.com/support/cases#/create?issueType=service-limit-increase&limitType=ec2-instances) to increase the limit for your desired instance type. |
| 48 | + |
| 49 | +```bash |
| 50 | +# create a Cortex cluster on your AWS account |
| 51 | +cortex cluster up |
| 52 | + |
| 53 | +# set the default CLI environment (optional) |
| 54 | +cortex env default aws |
| 55 | +``` |
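If you'd rather customize the cluster than accept the defaults, a minimal `cluster.yaml` sketch might look like the following (the field values are placeholders and the `--config` flag is an assumption; check `cortex cluster up --help` and [cluster configuration](config.md)):

```bash
# write a minimal cluster.yaml; adjust the values for your account and workload
cat > cluster.yaml <<EOF
cluster_name: cortex
region: us-west-2
instance_type: m5.large
min_instances: 1
max_instances: 5
EOF

# create the cluster from the config file
cortex cluster up --config cluster.yaml
```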
| 56 | + |
| 57 | +You can now run the same commands shown above to deploy the iris classifier to AWS (if you didn't set the default CLI environment, add `--env aws` to the `cortex` commands). |
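For example, assuming you're still in the iris classifier example directory, the AWS deployment might look like this (drop `--env aws` if you set the default environment above):

```bash
# deploy to the AWS cluster instead of the local environment
cortex deploy --env aws

# check status, then clean up when you're done
cortex get iris-classifier --env aws
cortex delete iris-classifier --env aws
```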
| 58 | + |
| 59 | +## Next steps |
| 60 | + |
| 61 | +<!-- CORTEX_VERSION_MINOR --> |
| 62 | +* Try the [tutorial](../../examples/sklearn/iris-classifier/README.md) to learn more about how to use Cortex. |
| 63 | +* Deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/0.18/examples). |
| 64 | +* See our [exporting docs](../deployments/exporting.md) to learn how to export your model for use in an API.
| 65 | +* See [uninstall](uninstall.md) if you'd like to spin down your cluster. |