
Commit 0c745c2

chore: usage documentation
1 parent e43cba7 commit 0c745c2

File tree

2 files changed: 259 additions & 17 deletions


samples/code-expert/README.md

Lines changed: 24 additions & 17 deletions
@@ -169,23 +169,6 @@ If you choose to
 use [Cross-Region Inference](https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference.html) for
 increased throughput, you will need to activate the models in each region that will be used.
 
-### Configure Rules
-
-Prepare your rules file according to the [format](documentation/rules.md#code-expert-rules-configuration-format).
-
-Upload your **rules.json** file to the configuration bucket in **ConfigBucketName** from the deployment output.
-
-```shell
-aws s3 cp rules.json s3://<ConfigBucketName>/rules.json
-```
-
-### Demo
-
-You are now ready to perform code reviews!
-
-To run the [demo](demo/README.md#code-expert-demo-app), you will need the **InputBucketName** and **StateMachineArn**
-outputs from the CDK deployment.
-
 ### Development
 
 #### Modify project

@@ -217,6 +200,30 @@ You may have logged in to `public.ecr.aws` with Docker and the credentials have
 docker logout public.ecr.aws
 ```
 
+## Usage
+
+### Configure Rules
+
+Prepare your rules file according to the [format](documentation/rules.md#code-expert-rules-configuration-format).
+
+Upload your **rules.json** file to the configuration bucket in **ConfigBucketName** from the deployment output.
+
+```shell
+aws s3 cp rules.json s3://<ConfigBucketName>/rules.json
+```
+
+### Demo
+
+You are now ready to perform code reviews!
+
+To run the [demo](demo/README.md#code-expert-demo-app), you will need the **InputBucketName** and **StateMachineArn**
+outputs from the CDK deployment.
+
+### Running Code Reviews
+
+See the [usage instructions](documentation/usage.md) for details on how to perform code reviews. You will need the
+**InputBucketName**, **OutputBucketName**, and **StateMachineArn** outputs from the CDK deployment.
+
 ## Cleanup
 
 In the event that you decide to stop using the prototype, we recommend that you follow a tear-down procedure. Most of

samples/code-expert/documentation/usage.md

Lines changed: 235 additions & 0 deletions
@@ -0,0 +1,235 @@
# Usage

## Overview

The code expert system evaluates code repositories against a set of guidelines using generative AI. The process involves:

1. Uploading a code repository
2. Starting a code review
3. Waiting for results
4. Downloading findings

## Prerequisites

- AWS CLI configured with appropriate permissions
- A zipped code repository to evaluate
- The State Machine ARN (available in CloudFormation outputs)
- S3 bucket names (available in CloudFormation outputs)

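To confirm that the AWS CLI is configured with working credentials before you start, a quick sanity check (optional, not required by the system) is:

```shell
# Verify that the CLI can authenticate and show which account/role it will use
aws sts get-caller-identity

# Show the active profile, region, and credential source
aws configure list
```
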
## Getting Started

First, get your resource information from the CloudFormation stack outputs:

```shell
aws cloudformation describe-stacks --stack-name CodeExpert --query 'Stacks[0].Outputs'
```

Sample output:

```json
[
  {
    "OutputKey": "InputBucketName",
    "OutputValue": "code-expert-inputbucket-1h0zbro1ysu7z"
  },
  {
    "OutputKey": "OutputBucketName",
    "OutputValue": "code-expert-outputbucket-8s7pyxq2m4e9"
  },
  {
    "OutputKey": "StateMachineArn",
    "OutputValue": "arn:aws:states:us-east-1:123456789012:stateMachine:CodeExpert"
  }
]
```

Save these values for use in subsequent commands:

```shell
INPUT_BUCKET="code-expert-inputbucket-1h0zbro1ysu7z"
OUTPUT_BUCKET="code-expert-outputbucket-8s7pyxq2m4e9"
STATE_MACHINE_ARN="arn:aws:states:us-east-1:123456789012:stateMachine:CodeExpert"
```

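If you prefer to script this step, each output can also be pulled individually with a JMESPath query (assuming the stack is named CodeExpert, as above):

```shell
# Fetch a single stack output value as plain text
INPUT_BUCKET=$(aws cloudformation describe-stacks \
  --stack-name CodeExpert \
  --query "Stacks[0].Outputs[?OutputKey=='InputBucketName'].OutputValue" \
  --output text)
echo "${INPUT_BUCKET}"
```
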
## Input Format

To start a code review, provide:

1. A zip file containing the code repository
2. The model ID to use for evaluation
3. Whether to use multiple evaluation mode (evaluating multiple rules in one model invocation)

The Step Functions input schema is:

```
{
  "repo_key": "string",            // S3 key of the uploaded repository zip
  "model_id": "string",            // Bedrock model ID
  "multiple_evaluation": boolean   // Whether to evaluate multiple rules per invocation
}
```

## Starting a Code Review

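If your repository is not already packaged as a zip file, one way to create the archive (assuming the code lives in a git working copy; adjust to your layout) is:

```shell
# Create a zip of the current checkout without the .git directory
git archive --format=zip --output=code.zip HEAD

# Alternatively, zip the working directory directly, excluding .git
# zip -r code.zip . -x "*.git*"
```
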
1. Upload the repository:

```bash
aws s3 cp code.zip s3://${INPUT_BUCKET}/dataset/code.zip
```

Sample output:

```shell
upload: ./code.zip to s3://code-expert-inputbucket-1h0zbro1ysu7z/dataset/code.zip
```

2. Start the Step Functions execution:

```bash
aws stepfunctions start-execution \
  --state-machine-arn ${STATE_MACHINE_ARN} \
  --name "code_review_$(date +%s)" \
  --input '{
    "repo_key": "dataset/code.zip",
    "model_id": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
    "multiple_evaluation": false
  }'
```

Sample output:

```json
{
  "executionArn": "arn:aws:states:us-east-1:123456789012:execution:CodeExpert:code_review_1709543898",
  "startDate": "2024-03-04T12:31:38.793000+00:00"
}
```

Save the executionArn:

```shell
EXECUTION_ARN="arn:aws:states:us-east-1:123456789012:execution:CodeExpert:code_review_1709543898"
```

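Rather than copying the ARN by hand, you can capture it at launch time with the CLI's `--query` option; this is the same invocation as above, shown only as a convenience:

```shell
# Start the execution and keep the ARN in a shell variable
EXECUTION_ARN=$(aws stepfunctions start-execution \
  --state-machine-arn ${STATE_MACHINE_ARN} \
  --name "code_review_$(date +%s)" \
  --input '{"repo_key": "dataset/code.zip", "model_id": "us.anthropic.claude-3-5-sonnet-20241022-v2:0", "multiple_evaluation": false}' \
  --query 'executionArn' --output text)
echo "${EXECUTION_ARN}"
```
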
## Monitoring Progress

Check execution status:

```bash
aws stepfunctions describe-execution \
  --execution-arn ${EXECUTION_ARN}
```

Sample output while running:

```json
{
  "executionArn": "arn:aws:states:us-east-1:123456789012:execution:CodeExpert:code_review_1709543898",
  "stateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:CodeExpert",
  "name": "code_review_1709543898",
  "status": "RUNNING",
  "startDate": "2024-03-04T12:31:38.793000+00:00"
}
```

Sample output when complete:

```json
{
  "executionArn": "arn:aws:states:us-east-1:123456789012:execution:CodeExpert:code_review_1709543898",
  "stateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:CodeExpert",
  "name": "code_review_1709543898",
  "status": "SUCCEEDED",
  "startDate": "2024-03-04T12:31:38.793000+00:00",
  "stopDate": "2024-03-04T13:45:22.104000+00:00",
  "output": "{\"processFindings\":{\"bucket\":\"code-expert-outputbucket-8s7pyxq2m4e9\",\"key\":\"findings/code_review_1709543898.json\",\"errors_key\":\"errors/code_review_1709543898.json\"}}"
}
```

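Reviews can take a while on larger repositories, so it is often convenient to poll until the execution leaves the RUNNING state. A minimal sketch (the 30-second interval is arbitrary):

```shell
# Poll the execution status until it is no longer RUNNING
while true; do
  STATUS=$(aws stepfunctions describe-execution \
    --execution-arn ${EXECUTION_ARN} \
    --query 'status' --output text)
  echo "$(date): ${STATUS}"
  if [ "${STATUS}" != "RUNNING" ]; then
    break
  fi
  sleep 30
done
```
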
## Downloading Results

Get the findings:

```bash
aws s3 cp s3://${OUTPUT_BUCKET}/findings/code_review_1709543898.json findings.json
```

Sample findings content:

```json
[
  {
    "rule": "JAVA001",
    "file": "src/main/java/com/example/service/UserService.java",
    "snippet": "@Autowired\nprivate UserRepository userRepository;",
    "description": "Field injection is being used instead of constructor injection. This makes the class harder to test and obscures its dependencies.",
    "suggestion": "Use constructor injection instead. Replace the field injection with a final field and add a constructor:\n\nprivate final UserRepository userRepository;\n\n@Autowired\npublic UserService(UserRepository userRepository) {\n this.userRepository = userRepository;\n}"
  }
]
```

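For a quick local summary of the downloaded findings, a small `jq` query works well (this assumes `jq` is installed locally; it is not part of the system itself):

```shell
# Count findings per rule ID
jq 'group_by(.rule) | map({rule: .[0].rule, count: length})' findings.json

# List the affected files for a single rule, e.g. JAVA001
jq -r '.[] | select(.rule == "JAVA001") | .file' findings.json
```
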
Get any errors:

```bash
aws s3 cp s3://${OUTPUT_BUCKET}/errors/code_review_1709543898.json errors.json
```

Sample errors content:

```json
[
  {
    "file": "src/main/java/com/example/config/SecurityConfig.java",
    "error": "Failed to process record: ModelResponseError: Output missing tool use",
    "rules": [
      "SEC001",
      "SEC002"
    ]
  }
]
```

## Output Format

When the execution succeeds, the output includes S3 locations for the findings and any errors:

```
{
  "processFindings": {
    "bucket": "string",      // S3 bucket containing results
    "key": "string",         // S3 key for findings JSON
    "errors_key": "string"   // S3 key for errors JSON (if any)
  }
}
```

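Because the findings and errors keys are embedded in the execution output, you can also derive the download paths directly instead of typing them out; a sketch assuming `jq` is available and the execution has succeeded:

```shell
# Extract the findings and errors keys from the execution output and download both
OUTPUT_JSON=$(aws stepfunctions describe-execution \
  --execution-arn ${EXECUTION_ARN} \
  --query 'output' --output text)

FINDINGS_KEY=$(echo "${OUTPUT_JSON}" | jq -r '.processFindings.key')
ERRORS_KEY=$(echo "${OUTPUT_JSON}" | jq -r '.processFindings.errors_key')

aws s3 cp s3://${OUTPUT_BUCKET}/${FINDINGS_KEY} findings.json
aws s3 cp s3://${OUTPUT_BUCKET}/${ERRORS_KEY} errors.json
```
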
The findings JSON contains an array of findings:

```
[
  {
    "rule": "string",          // Rule ID
    "file": "string",          // File path where issue was found
    "snippet": "string",       // Relevant code snippet
    "description": "string",   // Description of the issue
    "suggestion": "string"     // Suggested improvement
  }
]
```

## Supported Models

The system supports the following Bedrock models:

- Claude 3 Haiku
- Claude 3.5 Haiku
- Claude 3.5 Sonnet
- Claude 3.5 Sonnet v2
- Claude 3.7
- Nova Micro
- Nova Lite
- Nova Pro

You can optionally use cross-region inference for higher throughput by using the regional model IDs (e.g., "us.anthropic.claude-3-5-sonnet-20241022-v2:0" instead of "anthropic.claude-3-5-sonnet-20241022-v2:0").

