docs/cluster-management/install.md (6 additions & 6 deletions)
@@ -7,9 +7,9 @@ _WARNING: you are on the master branch, please refer to the docs on the branch t
1. [Docker](https://docs.docker.com/install)
2. [AWS credentials](aws-credentials.md)

-## Installation
+## Spin up a cluster

-See [cluster configuration](config.md) to learn how you can customize your installation and [EC2 instances](ec2-instances.md) for an overview of how to pick an appropriate EC2 instance type for your cluster.
+See [cluster configuration](config.md) to learn how you can customize your cluster and [EC2 instances](ec2-instances.md) for an overview of several EC2 instance types.
<!-- CORTEX_VERSION_MINOR -->
```bash
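# the rest of this command block is elided in the diff; as a rough,
# hypothetical sketch (not necessarily the exact commands in install.md),
# spinning up a cluster with the Cortex CLI looks something like:

# provision AWS resources and start the cluster
# (assumes the CLI is installed and AWS credentials are configured)
cortex cluster up
```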
@@ -30,8 +30,8 @@ Note: This will create resources in your AWS account which aren't included in th
Create a `requirements.txt` file to specify the dependencies needed by `predictor.py`. Cortex will automatically install them into your runtime once you deploy:
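For illustration only (the package names below are hypothetical placeholders, not the tutorial's actual dependencies), a `requirements.txt` is just a plain list of pip packages:

```bash
# create a requirements.txt listing the pip packages predictor.py imports
# (package names here are illustrative placeholders)
cat > requirements.txt <<EOF
boto3
numpy
EOF
```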
@@ -100,9 +99,9 @@ You can skip dependencies that are [pre-installed](../../../docs/deployments/pyt
<br>
-## Configure an API
+## Configure your API

-Create a `cortex.yaml` file and add the configuration below. An `api` provides a runtime for inference and makes our `predictor.py` implementation available as a web service that can serve real-time predictions:
+Create a `cortex.yaml` file and add the configuration below. An `api` provides a runtime for inference and makes your `predictor.py` implementation available as a web service that can serve real-time predictions:
```yaml
# cortex.yaml
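# the remainder of this block is elided in the diff; below is a hypothetical
# sketch of an api spec (assuming the Python predictor type; field names may
# differ from the tutorial's actual configuration)
- name: my-api          # placeholder api name
  predictor:
    type: python
    path: predictor.py  # the implementation created earlier
```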
@@ -120,7 +119,7 @@ Create a `cortex.yaml` file and add the configuration below. An `api` provides a
## Deploy to AWS
-`cortex deploy` takes the declarative configuration from `cortex.yaml` and creates it on your Cortex cluster:
+`cortex deploy` takes the configuration from `cortex.yaml` and creates it on your cluster:
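The command block and sample output that follow this line in the full file are elided from the diff. As a rough sketch (assuming the CLI is already pointed at your cluster; the actual output, which the next paragraph refers to, is not reproduced here):

```bash
# create the api on the cluster (run from the directory containing cortex.yaml)
cortex deploy

# list your apis and their status
cortex get
```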
-The output above indicates that one replica of the API was requested and is available to serve predictions. Cortex will automatically launch more replicas if the load increases and spin down replicas if there is unused capacity.
+The output above indicates that one replica of your API was requested and is available to serve predictions. Cortex will automatically launch more replicas if the load increases and spin down replicas if there is unused capacity.
Running `cortex delete` will free up cluster resources and allow Cortex to scale down to the minimum number of instances you specified during cluster installation. It will not spin down your cluster.
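For example (the api name below is a placeholder, not necessarily the name used in the tutorial):

```bash
# delete the api to free up its compute resources; replace my-api with your api's name
cortex delete my-api
```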
Any questions? [chat with us](https://gitter.im/cortexlabs/cortex).