Commit 3669d7a

Set YAML all uppercase in docs (#1280)
1 parent fe2f3a1 commit 3669d7a

5 files changed: +5, -5 lines changed

docs/deployments/batch-api.md (1 addition, 1 deletion)

@@ -27,7 +27,7 @@ A Batch API deployed in Cortex will create/support the following:
 You specify the following:
 
 * a Cortex Predictor class in Python that defines how to initialize your model run batch inference
-* an API configuration yaml file that defines how your API will behave in production (parallelism, networking, compute, etc.)
+* an API configuration YAML file that defines how your API will behave in production (parallelism, networking, compute, etc.)
 
 Once you've implemented your predictor and defined your API configuration, you can use the Cortex CLI to deploy a Batch API. The Cortex CLI will package your predictor implementation and the rest of the code and dependencies and upload it to the Cortex Cluster. The Cortex Cluster will setup an endpoint to a web service that can receive job submission requests and manage jobs.
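The Predictor class mentioned in the bullet above can be sketched as follows. This is a minimal illustration, not the authoritative interface from predictors.md: the class name, the `config` argument, and the `(payload, batch_id)` signature are assumptions based on Cortex Batch API docs of this era.

```python
# Hypothetical sketch of a Batch API Predictor (assumed interface; see
# predictors.md in the repo for the authoritative signatures).

class PythonPredictor:
    def __init__(self, config):
        # Runs once at startup: load your exported model here. The
        # "model_path" key is a made-up example config field.
        self.model_path = config.get("model_path", "model.bin")

    def predict(self, payload, batch_id):
        # Called once per batch of a submitted job; `payload` is a list
        # of items and `batch_id` identifies the batch being processed.
        return [{"batch": batch_id, "item": item} for item in payload]


predictor = PythonPredictor({"model_path": "s3://my-bucket/model.bin"})
results = predictor.predict([1, 2], "batch-0")
```

In the real flow, the returned values would typically be written to storage rather than returned, since batch jobs run asynchronously.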

docs/deployments/batch-api/api-configuration.md (1 addition, 1 deletion)

@@ -2,7 +2,7 @@
 
 _WARNING: you are on the master branch, please refer to the docs on the branch that matches your `cortex version`_
 
-Once your model is [exported](../../guides/exporting.md) and you've implemented a [Predictor](predictors.md), you can configure your API via a yaml file (typically named `cortex.yaml`).
+Once your model is [exported](../../guides/exporting.md) and you've implemented a [Predictor](predictors.md), you can configure your API via a YAML file (typically named `cortex.yaml`).
 
 Reference the section below which corresponds to your Predictor type: [Python](#python-predictor), [TensorFlow](#tensorflow-predictor), or [ONNX](#onnx-predictor).
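For context, a `cortex.yaml` for a Batch API with a Python Predictor might look like the following sketch. The field names and values are assumptions drawn from the general shape of Cortex configs of this period, not copied from this commit:

```yaml
# Hypothetical cortex.yaml sketch (field names assumed, not authoritative)
- name: image-classifier    # name of the API
  kind: BatchAPI            # deployment type
  predictor:
    type: python            # python / tensorflow / onnx
    path: predictor.py      # file containing the Predictor class
  compute:
    cpu: 1
    mem: 2G
```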

docs/deployments/batch-api/endpoints.md (1 addition, 1 deletion)

@@ -177,7 +177,7 @@ RESPONSE:
 "start_time": <string> # e.g. 2020-07-16T14:56:10.276007415Z
 "end_time": <string> (optional) # e.g. 2020-07-16T14:56:10.276007415Z (only present if the job has completed)
 },
-"api_spec": <string>, # a base64 encoded string of your api configuration yaml that has been encoded in msgpack
+"api_spec": <string>, # a base64 encoded string of your api configuration YAML that has been encoded in msgpack
 "endpoint": <string> # endpoint for this job
 }
 ```
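The `api_spec` comment above describes two layers of encoding: the configuration is msgpack-encoded first, then base64-encoded. A small sketch of unwrapping the outer layer (the msgpack bytes below are a hand-made placeholder, and decoding the inner msgpack requires the third-party `msgpack` package, which is assumed here and left commented out):

```python
import base64

def decode_api_spec(api_spec_b64: str) -> bytes:
    """Return the raw msgpack-encoded API configuration bytes."""
    return base64.b64decode(api_spec_b64)

# Placeholder standing in for a real spec: msgpack for {"name": "my-api"}
raw = b"\x81\xa4name\xa6my-api"
encoded = base64.b64encode(raw).decode()

assert decode_api_spec(encoded) == raw

# With the msgpack package installed, you would then continue:
# import msgpack
# spec = msgpack.unpackb(decode_api_spec(encoded), raw=False)
```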

docs/deployments/realtime-api.md (1 addition, 1 deletion)

@@ -30,7 +30,7 @@ A Realtime API deployed in Cortex has the following features:
 You specify the following:
 
 * a Cortex Predictor class in Python that defines how to initialize and serve your model
-* an API configuration yaml file that defines how your API will behave in production (autoscaling, monitoring, networking, compute, etc.)
+* an API configuration YAML file that defines how your API will behave in production (autoscaling, monitoring, networking, compute, etc.)
 
 Once you've implemented your predictor and defined your API configuration, you can use the Cortex CLI to deploy a Realtime API. The Cortex CLI will package your predictor implementation and the rest of the code and dependencies and upload it to the Cortex Cluster. The Cortex Cluster will set up an HTTP endpoint that routes traffic to multiple replicas/copies of web servers initialized with your code.
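A realtime Predictor differs from a batch one mainly in serving one payload per HTTP request. A minimal sketch under the same caveats (class name and method signatures follow the general shape of predictors.md, not taken verbatim from it; the `threshold` config field is invented for illustration):

```python
# Hypothetical Realtime API Predictor sketch (assumed interface).

class PythonPredictor:
    def __init__(self, config):
        # Runs once per replica at startup: load your model here.
        self.threshold = config.get("threshold", 0.5)

    def predict(self, payload):
        # Runs per HTTP request with the parsed request body.
        score = float(payload["score"])
        return {"positive": score >= self.threshold}


predictor = PythonPredictor({"threshold": 0.7})
result = predictor.predict({"score": "0.9"})
```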

docs/deployments/realtime-api/api-configuration.md (1 addition, 1 deletion)

@@ -2,7 +2,7 @@
 
 _WARNING: you are on the master branch, please refer to the docs on the branch that matches your `cortex version`_
 
-Once your model is [exported](../../guides/exporting.md) and you've implemented a [Predictor](predictors.md), you can configure your API via a yaml file (typically named `cortex.yaml`).
+Once your model is [exported](../../guides/exporting.md) and you've implemented a [Predictor](predictors.md), you can configure your API via a YAML file (typically named `cortex.yaml`).
 
 Reference the section below which corresponds to your Predictor type: [Python](#python-predictor), [TensorFlow](#tensorflow-predictor), or [ONNX](#onnx-predictor).
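As with the Batch API configuration docs, the `cortex.yaml` referenced here can be sketched. A hypothetical Realtime API entry (field names, including the autoscaling keys, are assumptions based on Cortex configs of this period, not taken from this commit):

```yaml
# Hypothetical cortex.yaml sketch for a Realtime API (names assumed)
- name: sentiment-classifier
  kind: RealtimeAPI
  predictor:
    type: python
    path: predictor.py
  autoscaling:
    min_replicas: 1
    max_replicas: 5
  compute:
    cpu: 1
```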
