
Commit 6c68a6f

fix 311
The problem is hot partitions caused by repeatedly writing to the same partition. Adding a random suffix to the partition key distributes the writes across different partitions. The question states that the consumed throughput is far below the provisioned throughput, so increasing the WCUs and RCUs won't change anything.
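A minimal sketch of the write-sharding pattern the corrected answer describes: appending a random suffix to a date-based partition key so that writes for a single day spread across several partitions. The key format (`date#suffix`), the shard count, and the helper name are illustrative assumptions, not part of the question or the repo.

```python
import random

def sharded_partition_key(order_date: str, num_shards: int = 10) -> str:
    """Append a random suffix so writes for one order date spread
    across `num_shards` logical partitions instead of one hot one.

    The "date#suffix" format and shard count are assumptions for
    illustration; choose both to fit your own access patterns.
    """
    suffix = random.randint(0, num_shards - 1)
    return f"{order_date}#{suffix}"

# Each write for the same date now lands on one of 10 key values,
# e.g. "2024-07-15#0" ... "2024-07-15#9".
key = sharded_partition_key("2024-07-15")
```

Note that reads for a whole day must then query all suffixes (e.g. loop over `0..num_shards-1` and merge the results), which is the usual trade-off of this pattern.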
1 parent 5d9e38f commit 6c68a6f

File tree

1 file changed: +2 −2 lines

README.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -3412,8 +3412,8 @@ We are so thankful for every contribution, which makes sure we can deliver top-n
 ### A company uses Amazon DynamoDB for managing and tracking orders. The DynamoDB table is partitioned based on the order date. The company receives a huge increase in orders during a sales event, causing DynamoDB writes to throttle, and the consumed throughput is far below the provisioned throughput. According to AWS best practices, how can this issue be resolved with MINIMAL costs?

 - [ ] Create a new DynamoDB table for every order date.
-- [x] Increase the read and write capacity units of the DynamoDB table.
-- [ ] Add a random number suffix to the partition key values.
+- [ ] Increase the read and write capacity units of the DynamoDB table.
+- [x] Add a random number suffix to the partition key values.
 - [ ] Add a global secondary index to the DynamoDB table.

 **[⬆ Back to Top](#table-of-contents)**
```
