Commit 5825079

Documentation fixes (#1282)
1 parent cc31a4d commit 5825079

4 files changed: +17 -17 lines changed

docs/cluster-management/uninstall.md

Lines changed: 2 additions & 2 deletions
@@ -33,10 +33,10 @@ To delete them:
 export AWS_ACCESS_KEY_ID=***
 export AWS_SECRET_ACCESS_KEY=***
 
-# identify the name of your cortex s3 bucket
+# identify the name of your cortex S3 bucket
 aws s3 ls
 
-# delete the s3 bucket
+# delete the S3 bucket
 aws s3 rb --force s3://<bucket>
 
 # delete the log group (replace <log_group> with what was configured during installation, default: cortex)
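
For readers following along, the full cleanup sequence from that section of the docs looks roughly like this; the bucket name below is hypothetical, and the log-group step uses the standard `aws logs delete-log-group` CLI command:

```bash
# identify the name of your cortex S3 bucket
aws s3 ls

# delete the S3 bucket (replace with your actual bucket name)
aws s3 rb --force s3://cortex-abcd0123456789

# delete the log group (default name: cortex)
aws logs delete-log-group --log-group-name cortex
```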

docs/deployments/batch-api/endpoints.md

Lines changed: 10 additions & 10 deletions
@@ -62,14 +62,14 @@ RESPONSE:
 
 ### S3 file paths
 
-If your input data is a list of files such as images/videos in an s3 directory, you can define `file_path_lister` in your submission request payload. You can use `file_path_lister.s3_paths` to specify a list of files or prefixes, and `file_path_lister.includes` and/or `file_path_lister.excludes` to remove unwanted files. The s3 file paths will be aggregated into batches of size `file_path_lister.batch_size`. To learn more about fine-grained S3 file filtering see [filtering files](#filtering-files).
+If your input data is a list of files such as images/videos in an S3 directory, you can define `file_path_lister` in your submission request payload. You can use `file_path_lister.s3_paths` to specify a list of files or prefixes, and `file_path_lister.includes` and/or `file_path_lister.excludes` to remove unwanted files. The S3 file paths will be aggregated into batches of size `file_path_lister.batch_size`. To learn more about fine-grained S3 file filtering see [filtering files](#filtering-files).
 
 __The total size of a batch must be less than 256 KiB.__
 
 This submission pattern can be useful in the following scenarios:
 
-* you have a list of images/videos in an s3 directory
-* each s3 file represents a single sample or a small number of samples
+* you have a list of images/videos in an S3 directory
+* each S3 file represents a single sample or a small number of samples
 
 If a single S3 file contains a lot of samples/rows, try the next submission strategy.
 
@@ -78,10 +78,10 @@ POST <batch_api_endpoint>/:
 {
     "workers": <int>,  # the number of workers to allocate for this job (required)
     "file_path_lister": {
-        "s3_paths": [<string>],  # can be s3 prefixes or complete s3 paths (required)
+        "s3_paths": [<string>],  # can be S3 prefixes or complete S3 paths (required)
         "includes": [<string>],  # glob patterns (optional)
         "excludes": [<string>],  # glob patterns (optional)
-        "batch_size": <int>,  # the number of s3 file paths per batch (the predict() function is called once per batch) (required)
+        "batch_size": <int>,  # the number of S3 file paths per batch (the predict() function is called once per batch) (required)
     }
     "config": {  # custom fields for this specific job (will override values in `config` specified in your api configuration) (optional)
         "string": <any>
@@ -102,22 +102,22 @@ RESPONSE:
 
 ### Newline delimited JSON files in S3
 
-If your input dataset is a newline delimited json file in an s3 directory (or a list of them), you can define `delimited_files` in your request payload to break up the contents of the file into batches of size `delimited_files.batch_size`.
+If your input dataset is a newline delimited json file in an S3 directory (or a list of them), you can define `delimited_files` in your request payload to break up the contents of the file into batches of size `delimited_files.batch_size`.
 
-Upon receiving `delimited_files`, your Batch API will iterate through the `delimited_files.s3_paths` to generate the set of s3 files to process. You can use `delimited_files.includes` and `delimited_files.excludes` to filter out unwanted files. Each S3 file will be parsed as a newline delimited JSON file. Each line in the file should be a JSON object, which will be treated as a single sample. The S3 file will be broken down into batches of size `delimited_files.batch_size` and submitted to your workers. To learn more about fine-grained S3 file filtering see [filtering files](#filtering-files).
+Upon receiving `delimited_files`, your Batch API will iterate through the `delimited_files.s3_paths` to generate the set of S3 files to process. You can use `delimited_files.includes` and `delimited_files.excludes` to filter out unwanted files. Each S3 file will be parsed as a newline delimited JSON file. Each line in the file should be a JSON object, which will be treated as a single sample. The S3 file will be broken down into batches of size `delimited_files.batch_size` and submitted to your workers. To learn more about fine-grained S3 file filtering see [filtering files](#filtering-files).
 
 __The total size of a batch must be less than 256 KiB.__
 
 This submission pattern is useful in the following scenarios:
 
-* one or more s3 files contains a large number of samples and must be broken down into batches
+* one or more S3 files contains a large number of samples and must be broken down into batches
 
 ```yaml
 POST <batch_api_endpoint>/:
 {
     "workers": <int>,  # the number of workers to allocate for this job (required)
     "delimited_files": {
-        "s3_paths": [<string>],  # can be s3 prefixes or complete s3 paths (required)
+        "s3_paths": [<string>],  # can be S3 prefixes or complete S3 paths (required)
         "includes": [<string>],  # glob patterns (optional)
         "excludes": [<string>],  # glob patterns (optional)
         "batch_size": <int>,  # the number of json objects per batch (the predict() function is called once per batch) (required)
@@ -201,7 +201,7 @@ RESPONSE:
 
 When submitting a job using `delimited_files` or `file_path_lister`, you can use `s3_paths` in conjunction with `includes` and `excludes` to precisely filter files.
 
-The Batch API will iterate through each s3 path in `s3_paths`. If the s3 path is a prefix, it iterates through each file in that prefix. For each file, if `includes` is non-empty, it will discard the s3 path if the s3 file doesn't match any of the glob patterns provided in `includes`. After passing the `includes` filter (if specified), if the `excludes` is non-empty, it will discard the s3 path if the s3 files matches any of the glob patterns provided in `excludes`.
+The Batch API will iterate through each S3 path in `s3_paths`. If the S3 path is a prefix, it iterates through each file in that prefix. For each file, if `includes` is non-empty, it will discard the S3 path if the S3 file doesn't match any of the glob patterns provided in `includes`. After passing the `includes` filter (if specified), if the `excludes` is non-empty, it will discard the S3 path if the S3 files matches any of the glob patterns provided in `excludes`.
 
 If you aren't sure which files will be processed in your request, specify the `dryRun=true` query parameter in the job submission request to see the target list.
 
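A sketch of that dry run (endpoint, bucket, and glob pattern are hypothetical; the payload is a normal submission with `dryRun=true` appended as a query parameter):

```bash
curl "http://<load_balancer_endpoint>/my-batch-api?dryRun=true" \
  -X POST -H "Content-Type: application/json" \
  -d '{
        "workers": 1,
        "file_path_lister": {
          "s3_paths": ["s3://my-bucket/images/"],
          "excludes": ["**.txt"],
          "batch_size": 10
        }
      }'
```

Per the paragraph above, this returns the list of files that would be processed, without starting the job.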

docs/deployments/batch-api/predictors.md

Lines changed: 3 additions & 3 deletions
@@ -81,7 +81,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using a PythonPredictor in [examples/batch/image-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/batch/image-classifier)
+You can find an example of a BatchAPI using a PythonPredictor in [examples/batch/image-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/batch/image-classifier).
 
 ### Pre-installed packages
 
@@ -198,7 +198,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using a TensorFlowPredictor in [examples/batch/tensorflow](https://github.com/cortexlabs/cortex/tree/master/examples/batch/tensorflow)
+You can find an example of a BatchAPI using a TensorFlowPredictor in [examples/batch/tensorflow](https://github.com/cortexlabs/cortex/tree/master/examples/batch/tensorflow).
 
 ### Pre-installed packages
 
@@ -267,7 +267,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using an ONNXPredictor in [examples/batch/onnx](https://github.com/cortexlabs/cortex/tree/master/examples/batch/onnx)
+You can find an example of a BatchAPI using an ONNXPredictor in [examples/batch/onnx](https://github.com/cortexlabs/cortex/tree/master/examples/batch/onnx).
 
 ### Pre-installed packages
 
examples/batch/image-classifier/README.md

Lines changed: 2 additions & 2 deletions
@@ -415,7 +415,7 @@ spinning up workers...
 
 The status of your job, which you can get from `cortex get <BATCH_API_NAME> <JOB_ID>`, should change from `running` to `succeeded` once the job has completed. If it changes to a different status, you may be able to find the stacktrace using `cortex logs <BATCH_API_NAME> <JOB_ID>`. If your job has completed successfully, you can view the results of the image classification in the S3 directory you specified in the job submission.
 
-Using AWS CLI:
+Using the AWS CLI:
 
 ```bash
 $ aws s3 ls $CORTEX_DEST_S3_DIR/<JOB_ID>/
@@ -524,7 +524,7 @@ spinning up workers...
 
 The status of your job, which you can get from `cortex get <BATCH_API_NAME> <JOB_ID>`, should change from `running` to `succeeded` once the job has completed. If it changes to a different status, you may be able to find the stacktrace using `cortex logs <BATCH_API_NAME> <JOB_ID>`. If your job has completed successfully, you can view the results of the image classification in the S3 directory you specified in the job submission.
 
-Using AWS CLI:
+Using the AWS CLI:
 
 ```bash
 $ aws s3 ls $CORTEX_DEST_S3_DIR/<JOB_ID>/
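
To make that status-checking workflow concrete, a typical sequence might look like this (the API name and job ID are hypothetical; the commands come from the paragraph above, plus the standard `aws s3 cp`):

```bash
cortex get image-classifier 69b9e1a2    # poll the job status
cortex logs image-classifier 69b9e1a2   # inspect the stacktrace if the job failed

# once the status is "succeeded", download the classification results
aws s3 cp --recursive $CORTEX_DEST_S3_DIR/69b9e1a2/ ./results/
```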
