Commit f9ee4a9

Authored by Miguel Varela Ramos

Move python code into its own folder (#2143)

* Move python code out of the pkg/ directory
* Fix import order
1 parent 1c49e1d commit f9ee4a9

86 files changed: +128 additions, -126 deletions

build/cli.sh

Lines changed: 1 addition & 1 deletion

```diff
@@ -47,7 +47,7 @@ function build_and_upload() {
 }
 
 function build_python {
-  pushd $ROOT/pkg/cortex/client
+  pushd $ROOT/python/client
   python setup.py sdist
 
   if [ "$upload" == "true" ]; then
```

dev/generate_python_client_md.sh

Lines changed: 2 additions & 2 deletions

```diff
@@ -27,7 +27,7 @@ docs_path="$ROOT/docs/clients/python.md"
 
 pip3 uninstall -y cortex
 
-cd $ROOT/pkg/cortex/client
+cd $ROOT/python/client
 
 pip3 install -e .
 
@@ -67,4 +67,4 @@ truncate -s -1 $docs_path
 sed -i "s/^## create\\\_api/## create\\\_api\n\n<!-- CORTEX_VERSION_MINOR -->/g" $docs_path
 
 pip3 uninstall -y cortex
-rm -rf $ROOT/pkg/cortex/client/cortex.egg-info
+rm -rf $ROOT/python/client/cortex.egg-info
```

dev/python_version_test.sh

Lines changed: 2 additions & 2 deletions

```diff
@@ -35,7 +35,7 @@ pip install requests
 export CORTEX_CLI_PATH=$ROOT/bin/cortex
 
 # install cortex
-cd $ROOT/pkg/cortex/client
+cd $ROOT/python/client
 pip install -e .
 
 # run script.py
@@ -44,4 +44,4 @@ python $ROOT/dev/deploy_test.py $2
 # clean up conda
 conda deactivate
 conda env remove -n env
-rm -rf $ROOT/pkg/cortex/client/cortex.egg-info
+rm -rf $ROOT/python/client/cortex.egg-info
```

dev/versions.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -197,7 +197,7 @@ Note: it's ok if example training notebooks aren't upgraded, as long as the expo
 
 ## Python packages
 
-1. Update versions in `pkg/cortex/serve/*requirements.txt`
+1. Update versions in `python/serve/*requirements.txt`
 
 ## S6-overlay supervisor
 
```
docs/workloads/async/models.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -44,7 +44,7 @@ class Handler:
 <!-- CORTEX_VERSION_MINOR -->
 
 Cortex provides a `tensorflow_client` to your handler's constructor. `tensorflow_client` is an instance
-of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py)
+of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/python/serve/cortex_internal/lib/client/tensorflow.py)
 that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as
 an instance variable in your handler class, and your `handle_async()` function should call `tensorflow_client.predict()` to make
 an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions
```
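For context, here is a minimal sketch of the async handler shape this passage describes. It is illustrative only and not part of this commit; the constructor signature and method body are assumptions based on the prose above.

```python
# Illustrative sketch (not from this commit): an async handler that stores
# the provided `tensorflow_client` and calls predict() in handle_async().
class Handler:
    def __init__(self, tensorflow_client, config):
        # save the client as an instance variable, as the docs recommend
        self.client = tensorflow_client

    def handle_async(self, payload):
        # preprocessing of the JSON payload would go here
        prediction = self.client.predict(payload)
        # postprocessing of the prediction would go here
        return prediction
```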

docs/workloads/batch/models.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -55,7 +55,7 @@ class Handler:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Handler class' constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your `handle_batch()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `handle_batch()` function as well.
+Cortex provides a `tensorflow_client` to your Handler class' constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/python/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your `handle_batch()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `handle_batch()` function as well.
 
 When multiple models are defined using the Handler's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). There is also an optional third argument to specify the model version.
 
````
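For context, a hedged sketch of the batch handler this passage describes, including the `model_name` second argument; the `batch_id` parameter and the rest of the signature are assumptions, not code from this commit.

```python
# Illustrative sketch (not from this commit): a batch handler calling
# predict() with an explicit model name when multiple models are defined.
class Handler:
    def __init__(self, tensorflow_client, config):
        self.client = tensorflow_client

    def handle_batch(self, payload, batch_id):
        # the second argument selects the model; an optional third
        # argument would select the model version
        return self.client.predict(payload, "text-generator")
```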

docs/workloads/realtime/models.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -95,7 +95,7 @@ class Handler:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-When explicit model paths are specified in the Python handler's API configuration, Cortex provides a `model_client` to your Handler's constructor. `model_client` is an instance of [ModelClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/python.py) that is used to load model(s) (it calls the `load_model()` method of your handler, which must be defined when using explicit model paths). It should be saved as an instance variable in your handler class, and your handler method should call `model_client.get_model()` to load your model for inference. Preprocessing of the JSON/gRPC payload and postprocessing of predictions can be implemented in your handler method as well.
+When explicit model paths are specified in the Python handler's API configuration, Cortex provides a `model_client` to your Handler's constructor. `model_client` is an instance of [ModelClient](https://github.com/cortexlabs/cortex/tree/master/python/serve/cortex_internal/lib/client/python.py) that is used to load model(s) (it calls the `load_model()` method of your handler, which must be defined when using explicit model paths). It should be saved as an instance variable in your handler class, and your handler method should call `model_client.get_model()` to load your model for inference. Preprocessing of the JSON/gRPC payload and postprocessing of predictions can be implemented in your handler method as well.
 
 When multiple models are defined using the Handler's `multi_model_reloading` field, the `model_client.get_model()` method expects an argument `model_name` which must hold the name of the model that you want to load (for example: `self.client.get_model("text-generator")`). There is also an optional second argument to specify the model version.
 
@@ -284,7 +284,7 @@ class Handler:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Handler's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your handler method should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your handler method as well.
+Cortex provides a `tensorflow_client` to your Handler's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/python/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your handler method should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your handler method as well.
 
 When multiple models are defined using the Handler's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). There is also an optional third argument to specify the model version.
 
````
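For context, a hedged sketch of the `model_client` flow from the first hunk above; the `load_model()` body (pickle-based) and the `handle_post()` method name are illustrative assumptions, not code from this commit.

```python
# Illustrative sketch (not from this commit): a realtime handler that loads
# models through the ModelClient described above.
import pickle

class Handler:
    def __init__(self, model_client, config):
        self.client = model_client

    def load_model(self, model_path):
        # invoked by the client to (re)load a model from disk; the pickle
        # loading shown here is only an example
        with open(f"{model_path}/model.pkl", "rb") as f:
            return pickle.load(f)

    def handle_post(self, payload):
        # get_model() takes the model name when `multi_model_reloading`
        # defines several models (optional second argument: version)
        model = self.client.get_model("text-generator")
        return model.predict(payload)
```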

images/downloader/Dockerfile

Lines changed: 3 additions & 3 deletions

```diff
@@ -10,14 +10,14 @@ RUN apt-get update -qq && apt-get install -y -q \
     pip install --upgrade pip && \
     rm -rf /root/.cache/pip*
 
-COPY pkg/cortex/serve/cortex_internal.requirements.txt /src/cortex/serve/cortex_internal.requirements.txt
+COPY python/serve/cortex_internal.requirements.txt /src/cortex/serve/cortex_internal.requirements.txt
 
 RUN pip install --no-cache-dir \
     -r /src/cortex/serve/cortex_internal.requirements.txt && \
     rm -rf /root/.cache/pip*
 
-COPY pkg/cortex/downloader /src/cortex/downloader
-COPY pkg/cortex/serve/ /src/cortex/serve
+COPY python/downloader /src/cortex/downloader
+COPY python/serve/ /src/cortex/serve
 ENV CORTEX_LOG_CONFIG_FILE /src/cortex/serve/log_config.yaml
 
 RUN pip install --no-deps /src/cortex/serve/ && \
```

images/python-handler-cpu/Dockerfile

Lines changed: 5 additions & 5 deletions

```diff
@@ -43,17 +43,17 @@ RUN /opt/conda/bin/conda create -n env -c conda-forge python=$PYTHONVERSION pip=
 ENV BASH_ENV=~/.env
 SHELL ["/bin/bash", "-c"]
 
-COPY pkg/cortex/serve/serve.requirements.txt /src/cortex/serve/serve.requirements.txt
-COPY pkg/cortex/serve/cortex_internal.requirements.txt /src/cortex/serve/cortex_internal.requirements.txt
+COPY python/serve/serve.requirements.txt /src/cortex/serve/serve.requirements.txt
+COPY python/serve/cortex_internal.requirements.txt /src/cortex/serve/cortex_internal.requirements.txt
 RUN pip install --no-cache-dir \
     -r /src/cortex/serve/serve.requirements.txt \
     -r /src/cortex/serve/cortex_internal.requirements.txt
 
-COPY pkg/cortex/serve/init/install-core-dependencies.sh /usr/local/cortex/install-core-dependencies.sh
+COPY python/serve/init/install-core-dependencies.sh /usr/local/cortex/install-core-dependencies.sh
 RUN chmod +x /usr/local/cortex/install-core-dependencies.sh
 
-COPY pkg/cortex/serve/ /src/cortex/serve
-COPY pkg/cortex/client/ /src/cortex/client
+COPY python/serve/ /src/cortex/serve
+COPY python/client/ /src/cortex/client
 ENV CORTEX_LOG_CONFIG_FILE /src/cortex/serve/log_config.yaml
 
 RUN pip install --no-deps /src/cortex/serve/ && \
```

images/python-handler-gpu/Dockerfile

Lines changed: 5 additions & 5 deletions

```diff
@@ -45,17 +45,17 @@ RUN /opt/conda/bin/conda create -n env -c conda-forge python=$PYTHONVERSION pip=
 ENV BASH_ENV=~/.env
 SHELL ["/bin/bash", "-c"]
 
-COPY pkg/cortex/serve/serve.requirements.txt /src/cortex/serve/serve.requirements.txt
-COPY pkg/cortex/serve/cortex_internal.requirements.txt /src/cortex/serve/cortex_internal.requirements.txt
+COPY python/serve/serve.requirements.txt /src/cortex/serve/serve.requirements.txt
+COPY python/serve/cortex_internal.requirements.txt /src/cortex/serve/cortex_internal.requirements.txt
 RUN pip install --no-cache-dir \
     -r /src/cortex/serve/serve.requirements.txt \
     -r /src/cortex/serve/cortex_internal.requirements.txt
 
-COPY pkg/cortex/serve/init/install-core-dependencies.sh /usr/local/cortex/install-core-dependencies.sh
+COPY python/serve/init/install-core-dependencies.sh /usr/local/cortex/install-core-dependencies.sh
 RUN chmod +x /usr/local/cortex/install-core-dependencies.sh
 
-COPY pkg/cortex/serve/ /src/cortex/serve
-COPY pkg/cortex/client/ /src/cortex/client
+COPY python/serve/ /src/cortex/serve
+COPY python/client/ /src/cortex/client
 ENV CORTEX_LOG_CONFIG_FILE /src/cortex/serve/log_config.yaml
 
 RUN pip install --no-deps /src/cortex/serve/ && \
```
