
Commit 4033fb0

Update README.md
1 parent fc4034d commit 4033fb0

1 file changed: +2 −4 lines changed


README.md

Lines changed: 2 additions & 4 deletions
```diff
@@ -2,7 +2,6 @@
 
 Cortex is an open source platform that takes machine learning models—trained with nearly any framework—and turns them into production web APIs in one command. <br>
 
-<!-- CORTEX_VERSION_MINOR x2 (e.g. www.cortex.dev/v/0.8/...) -->
 [install](https://www.cortex.dev/install)[docs](https://www.cortex.dev)[examples](examples)[we're hiring](https://angel.co/cortex-labs-inc/jobs)[email us](mailto:hello@cortex.dev)[chat with us](https://gitter.im/cortexlabs/cortex)
 
 <br>
@@ -14,15 +13,14 @@ Cortex is an open source platform that takes machine learning models—trained w
 
 ## Quickstart
 
-<!-- CORTEX_VERSION_MINOR (e.g. www.cortex.dev/v/0.8/...) -->
 Below, we'll walk through how to use Cortex to deploy OpenAI's GPT-2 model as a service on AWS. You'll need to [install Cortex](https://www.cortex.dev/install) on your AWS account before getting started.
 
 <br>
 
 ### Step 1: Configure your deployment
 
 <!-- CORTEX_VERSION_MINOR -->
-Define a `deployment` and an `api` resource. A `deployment` specifies a set of APIs that are deployed as a single unit. An `api` makes a model available as a web service that can serve real-time predictions. The configuration below will download the model from the `cortex-examples` S3 bucket. You can run the code that generated the exported GPT-2 model [here](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/text-generator/gpt-2.ipynb).
+Define a `deployment` and an `api` resource. A `deployment` specifies a set of APIs that are deployed together. An `api` makes a model available as a web service that can serve real-time predictions. The configuration below will download the model from the `cortex-examples` S3 bucket. You can run the code that generated the model [here](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/text-generator/gpt-2.ipynb).
 
 ```yaml
 # cortex.yaml
@@ -56,7 +54,7 @@ def pre_inference(sample, metadata):
 
 def post_inference(prediction, metadata):
     response = prediction["sample"]
-    return {encoder.decode(response)}
+    return encoder.decode(response)
 
 ```
 
 <br>
```
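The `cortex.yaml` in the diff is truncated, so for orientation only, a rough sketch of what a `deployment`/`api` pair from that era of Cortex examples might look like. The field names and values here are assumptions for illustration, not taken from this commit:

```yaml
# cortex.yaml (sketch; field names and values are assumptions, not from this commit)
- kind: deployment
  name: text

- kind: api
  name: generator
  model: s3://cortex-examples/text-generator/gpt-2/124M
  request_handler: handler.py
```

The `api` points at an exported model in the `cortex-examples` S3 bucket and at the handler file containing `pre_inference`/`post_inference`.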

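The one-line Python fix in the last hunk removes a pair of braces. In Python, `{expr}` is a set literal, so the old `post_inference` returned a one-element set instead of a string, and sets are not JSON-serializable. A minimal sketch of the before/after behavior, with a hypothetical `StubEncoder` standing in for the GPT-2 BPE encoder used in the example:

```python
import json

class StubEncoder:
    """Hypothetical stand-in for the GPT-2 BPE encoder in the example."""
    def decode(self, token_ids):
        # The real encoder maps token ids back to text; this fake just joins them.
        return " ".join(str(t) for t in token_ids)

encoder = StubEncoder()

def post_inference_old(prediction, metadata):
    # Before the commit: {…} builds a one-element set, not a string.
    response = prediction["sample"]
    return {encoder.decode(response)}

def post_inference(prediction, metadata):
    # After the commit: return the decoded string directly.
    response = prediction["sample"]
    return encoder.decode(response)

prediction = {"sample": [17250, 612]}
print(post_inference(prediction, None))  # a plain string, JSON-serializable
try:
    json.dumps(post_inference_old(prediction, None))
except TypeError:
    print("old version returned a set, which json.dumps rejects")
```

Since API responses are typically serialized to JSON, the set version would fail at response time; the fixed version returns the decoded text directly.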