Commit ea3b45a

Flush stdout during CLI output streaming in Python Client (#1474)

1 parent: 293bcdb

File tree: 3 files changed, +13 −6 lines

docs/cluster-management/install.md (6 additions, 3 deletions)

````diff
@@ -19,13 +19,16 @@ See [here](../miscellaneous/cli.md#install-cortex-cli-without-python-client) to
 ```bash
 # clone the Cortex repository
 git clone -b master https://github.com/cortexlabs/cortex.git
+
+# navigate to the Pytorch text generator example
+cd cortex/examples/pytorch/text-generator
 ```
 
 ### Using the CLI
 
 ```bash
 # deploy the model as a realtime api
-cortex deploy cortex/examples/pytorch/text-generator/cortex.yaml
+cortex deploy
 
 # view the status of the api
 cortex get --watch
@@ -39,7 +42,7 @@ cortex get text-generator
 # generate text
 curl <API endpoint> \
     -X POST -H "Content-Type: application/json" \
-    -d '{"text": "machine learning is"}' \
+    -d '{"text": "machine learning is"}'
 
 # delete the api
 cortex delete text-generator
@@ -54,7 +57,7 @@ import requests
 local_client = cortex.client("local")
 
 # deploy the model as a realtime api and wait for it to become active
-deployments = local_client.deploy("cortex/examples/pytorch/text-generator/cortex.yaml", wait=True)
+deployments = local_client.deploy("./cortex.yaml", wait=True)
 
 # get the api's endpoint
 url = deployments[0]["api"]["endpoint"]
````
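The `curl` command in the updated docs is a plain JSON POST to the API endpoint. A minimal standard-library sketch of the same request (the docs use `requests` for the identical call; the throwaway local echo server below is purely illustrative and stands in for a real deployed endpoint):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen


class EchoHandler(BaseHTTPRequestHandler):
    """Stand-in for the deployed text-generator API (illustrative only)."""

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = payload["text"].encode()  # a real API would return generated text
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


def post_text(url, text):
    """POST {"text": ...} to the endpoint, mirroring the docs' curl command."""
    req = Request(
        url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return resp.read().decode()


if __name__ == "__main__":
    # Spin up the stand-in server on a free port, then send the request.
    server = HTTPServer(("127.0.0.1", 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = "http://127.0.0.1:%d" % server.server_port
    print(post_text(url, "machine learning is"))
    server.shutdown()
```

With a real deployment, `url` would come from `deployments[0]["api"]["endpoint"]` (or `cortex get text-generator`) rather than a local stub.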

pkg/workloads/cortex/client/README.md (6 additions, 3 deletions)

````diff
@@ -38,6 +38,9 @@ You must have [Docker](https://docs.docker.com/install) installed to run Cortex
 ```bash
 # clone the Cortex repository
 git clone -b master https://github.com/cortexlabs/cortex.git
+
+# navigate to the Pytorch text generator example
+cd cortex/examples/pytorch/text-generator
 ```
 
 ### In Python
@@ -48,7 +51,7 @@ import requests
 local_client = cortex.client("local")
 
 # deploy the model as a realtime api and wait for it to become active
-deployments = local_client.deploy("cortex/examples/pytorch/text-generator/cortex.yaml", wait=True)
+deployments = local_client.deploy("./cortex.yaml", wait=True)
 
 # get the api's endpoint
 url = deployments[0]["api"]["endpoint"]
@@ -63,7 +66,7 @@ local_client.delete_api("text-generator")
 ### Using the CLI
 ```bash
 # deploy the model as a realtime api
-cortex deploy cortex/examples/pytorch/text-generator/cortex.yaml
+cortex deploy
 
 # view the status of the api
 cortex get --watch
@@ -77,7 +80,7 @@ cortex get text-generator
 # generate text
 curl <API endpoint> \
     -X POST -H "Content-Type: application/json" \
-    -d '{"text": "machine learning is"}' \
+    -d '{"text": "machine learning is"}'
 
 # delete the api
 cortex delete text-generator
````

pkg/workloads/cortex/client/cortex/binary/__init__.py (1 addition, 0 deletions)

````diff
@@ -87,6 +87,7 @@ def run_cli(
         if not hide_output:
             if (not mixed_output) or (mixed_output and not result_found):
                 sys.stdout.write(c)
+                sys.stdout.flush()
 
     process.wait()
 
````