- [How to customize contract analysis according to your use case](#how-to-customize-contract-analysis-according-to-your-use-case)
- [How to use a different Amazon Bedrock FM](#how-to-use-a-different-amazon-bedrock-fm)
## Basic setup
### Local environment or Cloud9
You have the option of running the setup from a local workspace or from a Cloud9 environment.
In case you opt for Cloud9, you have to set up a Cloud9 environment in the same AWS Account where this Backend will be installed.
If your local workspace has a non-x86 processor architecture (for instance ARM, like the M-series processors in MacBooks), it is strongly recommended to perform the setup steps from a Cloud9 environment to avoid bundling issues with Lambda function dependencies (see [ticket](https://github.com/awslabs/generative-ai-cdk-constructs/issues/541)). Otherwise, set the `DOCKER_DEFAULT_PLATFORM` environment variable to `linux/amd64` to build `x86_64` packages.
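As a quick reference, the check and the workaround might look like this in a terminal (a minimal sketch; the `uname -m` check is just a convenience for identifying your architecture, the variable name is from the paragraph above):

```shell
# Print the machine architecture; arm64/aarch64 indicates a non-x86 workspace
uname -m

# Force Docker-based Lambda bundling to produce x86_64 packages
export DOCKER_DEFAULT_PLATFORM=linux/amd64
```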
#### Cloud9 setup (optional)
1. Follow the steps on [https://docs.aws.amazon.com/cloud9/latest/user-guide/setting-up.html](https://docs.aws.amazon.com/cloud9/latest/user-guide/setting-up.html)
```shell
cdk bootstrap
cdk deploy --require-approval=never
```
> Use `DOCKER_DEFAULT_PLATFORM=linux/amd64 cdk deploy --require-approval=never` on macOS
2. Any modifications made to the code can be applied to the deployed stack by running the same command again.
```shell
cdk deploy --require-approval=never
```
> Use `DOCKER_DEFAULT_PLATFORM=linux/amd64 cdk deploy --require-approval=never` on macOS
#### Populate Guidelines table
Once the Stack is set up, you need to populate the DynamoDB Guidelines table with the data from the Guidelines Excel sheet included in the `guidelines` folder.
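If you prefer to script the import, a minimal sketch with boto3 could look like the following; the table name, column names, and the idea of reading rows into dicts are assumptions for illustration, not the repository's actual import mechanism:

```python
def rows_to_items(rows):
    """Convert parsed spreadsheet rows (a list of dicts) into DynamoDB items,
    stringifying values so floats from Excel do not need Decimal conversion."""
    return [{k: str(v) for k, v in row.items()} for row in rows]

def put_guidelines(table_name, rows):
    import boto3  # imported here so the pure helper above has no AWS dependency
    table = boto3.resource("dynamodb").Table(table_name)
    with table.batch_writer() as batch:  # batches and retries PutItem calls
        for item in rows_to_items(rows):
            batch.put_item(Item=item)

# Usage (rows would come from the Excel sheet in the `guidelines` folder,
# e.g. pandas.read_excel(...).to_dict("records"); "GuidelinesTable" is a placeholder):
# put_guidelines("GuidelinesTable", [{"clause_type": "Liability", "guideline": "..."}])
```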
Click the **Enable specific models** button and enable the checkbox for Anthropic Claude 3 Haiku.
Click the **Next** and **Submit** buttons.
## How to customize contract analysis according to your use case
This solution was designed to support analysis of contracts of different types and in different languages, based on the assumption that the contracts establish an agreement between two parties: a given company and another party. The solution comes pre-configured to analyze service contracts in English for the company *AnyCompany*, together with an example of guidelines.
The recommended sequence of steps:

## How to use a different Amazon Bedrock FM
By default, the application uses Anthropic Claude 3 Haiku v1. Here are the steps to update the model in use. For this example, we will use [Amazon Nova Pro v1](https://aws.amazon.com/blogs/aws/introducing-amazon-nova-frontier-intelligence-and-industry-leading-price-performance/):
- Open the [app_properties.yaml](./app_properties.yaml) file and update the field ```claude_model_id``` with the model id you want to use; in this case, we set it to ```us.amazon.nova-pro-v1:0```. The list of model ids available through Amazon Bedrock is in the [documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html). Ensure the model you select is enabled in the console (Amazon Bedrock -> Model access) and available in your region.
- Depending on the model selected, you might need to update some hardcoded values for the maximum number of new tokens generated. For instance, Amazon Nova Pro v1 supports 5000 output tokens, which requires no modifications. However, some models have a maximum of 3000 output tokens, which requires some changes in the sample. Update the following lines if required:
- In file [fn-preprocess-contract/index.py](./stack/sfn/preprocessing/fn-preprocess-contract/index.py), update line 96 to set the chunk size to a value smaller than your model's max output tokens, and line 107 to match your model's max output tokens.
- In file [scripts/utils/llm.py](./scripts/utils/llm.py), update the max output tokens on line 28.
- In file [common-layer/llm.py](./stack/sfn/common-layer/llm.py), update the max output tokens on line 30.
- In file [fn-classify-clauses/index.py](./stack/sfn/classification/fn-classify-clauses/index.py), update line 182 with the max output tokens for your model.
- Re-deploy the solution as described in the previous sections.
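As a sanity check for the token limits mentioned above, a small helper like this (hypothetical, not part of the repository) can catch a chunk size that exceeds the model's output budget before you deploy:

```python
def validate_token_budget(chunk_size, max_output_tokens):
    """Ensure the preprocessing chunk size stays below the model's
    maximum number of output tokens, so generations are not truncated."""
    if chunk_size >= max_output_tokens:
        raise ValueError(
            f"chunk size {chunk_size} must be smaller than the model's "
            f"max output tokens ({max_output_tokens})"
        )
    return True

# Amazon Nova Pro v1 supports 5000 output tokens, so e.g. a 4000-token chunk is fine
validate_token_budget(chunk_size=4000, max_output_tokens=5000)
```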
### Troubleshooting
#### KeyError in step X
If you change the model, it is possible that you face an error in the Step Functions run. This can be due to the parsing of the LLM response.

In that case, identify the failing Lambda function from the Step Functions logs, and update the dedicated Lambda function code to enable verbose messaging. For instance, if the failing Lambda function is ```PreprocessingStepPreproc```, open the file [fn-preprocess-contract/index.py](./stack/sfn/preprocessing/fn-preprocess-contract/index.py) and update the invoke_llm code:
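The exact edit depends on the repository code, but the verbose-messaging idea can be sketched as a wrapper around the existing call; `invoke_llm` is the name from the text above, while the wrapper itself is hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def invoke_llm_verbose(invoke_llm, *args, **kwargs):
    """Call the existing invoke_llm, logging the raw inputs and raw response
    so parsing failures can be diagnosed from the function's logs."""
    logger.info(f"invoke_llm args={args} kwargs={kwargs}")
    response = invoke_llm(*args, **kwargs)
    logger.info(f"invoke_llm raw response: {response}")
    return response
```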
Then, modify the file [common-layer/llm.py](./stack/sfn/common-layer/llm.py) and print the response from the runnable invocation:
```python
response = chain.invoke({})
logger.info(f"Model response: {response}")  # <- log the response
content = response.content
```
Re-deploy the solution, and verify in the logs the structure of the response. Depending on the model used, the schema of the response may differ, so the ```usage``` and ```stop reason``` values might need to be parsed differently. In that case, add the correct parsing code in the file [common-layer/llm.py](./stack/sfn/common-layer/llm.py):
```python
if 'mymodel' in model_id:
    usage_data = response.XXX  # <- specify how to parse usage data
    stop_reason = response.XXX  # <- specify how to parse stop reason
```
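For illustration, the model-specific branch might end up looking like the following sketch; the attribute and key names here are placeholders to replace once you have inspected the logged response structure, not the repository's actual parsing code:

```python
def parse_usage_and_stop(response, model_id):
    """Extract usage data and stop reason from a LangChain-style message
    object, branching on the model id; the metadata keys are assumptions."""
    meta = response.response_metadata
    if "mymodel" in model_id:
        usage_data = meta.get("usage")        # <- adjust for your model
        stop_reason = meta.get("stopReason")  # <- adjust for your model
    else:
        usage_data = meta.get("usage")
        stop_reason = meta.get("stop_reason")
    return usage_data, stop_reason
```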