By default, the application uses Anthropic Claude 3 Haiku v1. The following steps explain how to switch to a different model, using [Amazon Nova Pro v1](https://aws.amazon.com/blogs/aws/introducing-amazon-nova-frontier-intelligence-and-industry-leading-price-performance/) as an example:
- Open the [app_properties.yaml](./app_properties.yaml) file and update the ```claude_model_id``` field to the model you selected; in this example, set it to ```us.amazon.nova-pro-v1:0```. The list of model ids available through Amazon Bedrock is in the [documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html). Ensure the model you select is enabled in the console (Amazon Bedrock -> Model access) and available in your region.
- Depending on the model selected, you might need to update some hardcoded values for the maximum number of new tokens generated. For instance, Amazon Nova Pro v1 supports 5000 output tokens, which requires no modification. However, a model with a maximum of 3000 output tokens requires some changes in the sample. Update the following lines if required (a sketch of the kind of change follows this list):
  - In file [fn-preprocess-contract/index.py](./stack/sfn/preprocessing/fn-preprocess-contract/index.py), update line 96 to set the chunk size to a value smaller than your model's max output tokens, and line 107 to match your model's max output tokens.
  - In file [scripts/utils/llm.py](./scripts/utils/llm.py), update the max output tokens on line 28.
  - In file [common-layer/llm.py](./stack/sfn/common-layer/llm.py), update the max output tokens on line 30.
  - In file [fn-classify-clauses/index.py](./stack/sfn/classification/fn-classify-clauses/index.py), update the max output tokens for your model on line 182.
- Re-deploy the solution as described in the previous sections.
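
As a rough illustration of the kind of edit involved (the variable names below are placeholders, not the actual names used in the sample files; check the line numbers referenced above in your copy of the code):

```python
# Hypothetical excerpt: the real constants live at the line numbers listed
# above and may be named differently in the sample.
MAX_OUTPUT_TOKENS = 3000  # <- your model's max output tokens (e.g. 3000 instead of 5000)
CHUNK_SIZE = 2500         # <- keep the chunk size below MAX_OUTPUT_TOKENS
```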
### Troubleshooting
#### KeyError in step X
If you change the model, you may encounter an error during the Step Functions run. This can be due to the parsing of the LLM response.
In that case, identify the failing Lambda function from the Step Functions logs, and update that function's code to enable verbose messaging. For instance, if the failing Lambda function is ```PreprocessingStepPreproc```, open the file [fn-preprocess-contract/index.py](./stack/sfn/preprocessing/fn-preprocess-contract/index.py) and update the invoke_llm code:
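
A minimal sketch of what such a change could look like (the exact logger setup and invoke_llm call site in the sample may differ):

```python
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)  # <- enable verbose messaging

# Hypothetical call site: log the raw result returned by invoke_llm so the
# response structure shows up in the CloudWatch logs.
result = invoke_llm(prompt)
logger.debug(f"invoke_llm result: {result}")
```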
Then, modify the file [common-layer/llm.py](./stack/sfn/common-layer/llm.py) and print the response from the runnable invocation:
```python
response = chain.invoke({})
logger.info(f"Model response: {response}")  # <- log the response
content = response.content
```
Re-deploy the solution, and verify the structure of the response in the logs. Depending on the model used, the schema of the response may be different, so the ```usage``` and ```stop reason``` values might need to be parsed differently. In that case, add the correct code in the file [common-layer/llm.py](./stack/sfn/common-layer/llm.py):
```python
if 'mymodel' in model_id:
    usage_data = response.XXX   # <- specify how to parse usage data
    stop_reason = response.XXX  # <- specify how to parse stop reason
```
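
If you are unsure where these values live for your model, recent LangChain chat models typically expose token usage on ```response.usage_metadata``` and provider-specific fields such as the stop reason under ```response.response_metadata```; the exact keys vary by model and library version, so treat the sketch below as an assumption to verify against the response you logged above:

```python
# Assumption: LangChain-style AIMessage fields; confirm the actual keys in
# the logged response before relying on them.
if 'nova' in model_id:
    usage_data = response.usage_metadata                        # e.g. {'input_tokens': ..., 'output_tokens': ...}
    stop_reason = response.response_metadata.get('stopReason')  # key name differs across providers
```

Re-deploy the solution after the change and confirm that the failing step completes.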