### Select 'Deploy Multi Model'

- Based on the 'models' field, a Compute Shape will be recommended to accommodate both models.
- Select the 'Fine Tuned Weights'.
- Only fine-tuned models of version `V2` can be deployed as weights in a Multi-Model Deployment. To deploy an older fine-tuned model weight, run the following command to convert it to version `V2`, then use the new fine-tuned model when creating the deployment. By default this command deletes the old fine-tuned model after conversion; add `--delete_model False` to keep it instead.

```bash
ads aqua model convert_fine_tune --model_id [FT_OCID]
```

- Select logging and endpoints (/v1/completions | /v1/chat/completions).
- Submit the form via the 'Deploy' button at the bottom.
### Inferencing with Multi-Model Deployment
There are two ways to send inference requests to models within a Multi-Model Deployment:

1. Python SDK (recommended) - see [here](#Multi-Model-Inferencing)
2. Using AQUA UI - see [here](#using-aqua-ui-interface-for-multi-model-deployment)

Once the Deployment is Active, view the model deployment details and inferencing form by clicking on the 'Deployments' tab and selecting the model within the Model Deployment list.
## 3. Create Multi-Model Deployment
All selected models will run on the same **GPU shape**, sharing the available compute resources. Make sure to choose a shape that meets the needs of all models in your deployment using the [MultiModel Configuration command](#get-multimodel-configuration).

Only fine-tuned models of version `V2` can be deployed as weights in a Multi-Model Deployment. To deploy an older fine-tuned model weight, run the following command to convert it to version `V2`, then use the new fine-tuned model OCID when creating the deployment. By default this command deletes the old fine-tuned model after conversion; add `--delete_model False` to keep it instead.

```bash
ads aqua model convert_fine_tune --model_id [FT_OCID]
```
### Description
Note: Multi-Model deployments are identified by the `"aqua_multimodel": "true"` tag associated with them.
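As a sketch of how that tag can be used to filter a listing programmatically (the deployment records below are illustrative stand-ins, not real API output):

```python
# Illustrative sketch: pick out Multi-Model deployments from a listing
# by the "aqua_multimodel" freeform tag. The records are made-up examples.
deployments = [
    {"display_name": "md-multi", "freeform_tags": {"aqua_multimodel": "true"}},
    {"display_name": "md-single", "freeform_tags": {}},
]
multi_model = [
    d for d in deployments
    if d.get("freeform_tags", {}).get("aqua_multimodel") == "true"
]
print([d["display_name"] for d in multi_model])
```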
### Edit Multi-Model Deployments
An AQUA deployment must be in the `ACTIVE` state to be updated, and only one of the following option groups can be updated at a time. There are two ways to update a model deployment: `ZDT` (zero downtime) and `LIVE`. The default update type for an AQUA deployment is `ZDT`, but `LIVE` is adopted when the `models` of a Multi-Model deployment are changed.
- `Name or description`: Change the name or description.
- `Default configuration`: Change or add freeform and defined tags.
- `Models`: Change the models.
- `Compute`: Change the number of CPUs or amount of memory for each CPU in gigabytes.
- `Logging`: Change the logging configuration for access and predict logs.
- `Load Balancer`: Change the load balancing bandwidth.
#### Usage

```bash
ads aqua deployment update [OPTIONS]
```

#### Required Parameters

`--model_deployment_id [str]`

The model deployment OCID to be updated.
#### Optional Parameters

`--models [str]`

The string representation of a JSON array, where each object defines a model's OCID and the number of GPUs assigned to it. The GPU count should always be a **power of two (e.g., 1, 2, 4, 8)**. <br>
Example: `'[{"model_id":"<model_ocid>", "gpu_count":1},{"model_id":"<model_ocid>", "gpu_count":1}]'` for `VM.GPU.A10.2` shape. <br>
`--display_name [str]`

The name of the model deployment.

`--description [str]`

The description of the model deployment. Defaults to None.

`--instance_count [int]`

The number of instances used for the model deployment. Defaults to 1.

`--log_group_id [str]`

The OCID of the OCI log group. The access log and predict log share the same log group.

`--access_log_id [str]`

The access log OCID for the access logs. Check [model deployment logging](https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm) for more details.

`--predict_log_id [str]`

The predict log OCID for the predict logs. Check [model deployment logging](https://docs.oracle.com/en-us/iaas/data-science/using/model_dep_using_logging.htm) for more details.

`--web_concurrency [int]`

The number of worker processes/threads to handle incoming requests.

`--bandwidth_mbps [int]`

The bandwidth limit on the load balancer in Mbps.

`--memory_in_gbs [float]`

Memory (in GB) for the selected shape.

`--ocpus [float]`

OCPU count for the selected shape.

`--freeform_tags [dict]`

Freeform tags for the model deployment.

`--defined_tags [dict]`

Defined tags for the model deployment.

#### Example

##### Edit Multi-Model deployment with `/v1/completions`
The only change required to infer a specific model from a Multi-Model deployment is to update the value of the `"model"` parameter in the request payload. The values for this parameter can be found in the Model Deployment details, under the field `"model_name"`. This parameter segregates the request flow, ensuring that each inference request is directed to the correct model within the MultiModel deployment.
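As a sketch, a `/v1/completions` request body targeting one model might look like the following; `<model_name>` is a placeholder for the `"model_name"` value shown in the deployment details:

```python
import json

# Placeholder payload for a Multi-Model deployment's /v1/completions endpoint.
# "model" routes the request to the intended model within the deployment.
payload = {
    "model": "<model_name>",  # from the deployment details' "model_name" field
    "prompt": "Hello!",
    "max_tokens": 100,
}
body = json.dumps(payload)
print(body)
```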