1. Preparation
On staging, I created two tables: one with on-demand capacity and one with provisioned capacity limited to 1 write capacity unit. I also created a third table on localstack. All of them were simple single-column tables.
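For reference, a minimal sketch of the setup, assuming boto3; the table names, key name, and region are hypothetical, since the originals weren't recorded:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-west-1")  # assumed region

# On-demand table: capacity scales automatically, billed per request.
dynamodb.create_table(
    TableName="batch-test-on-demand",  # hypothetical name
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Provisioned table throttled down to 1 write capacity unit.
dynamodb.create_table(
    TableName="batch-test-provisioned",  # hypothetical name
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
)

# The localstack copy only differs in the endpoint (4566 is localstack's
# default edge port).
local = boto3.client("dynamodb", endpoint_url="http://localhost:4566")
```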
2. Tests
I ran the following tests:
- Generated ~2k UUIDs (random strings); the count was deliberately not a multiple of 25, so the last batch would be smaller. The first test saved them all in one batch.
- Repeated the test, but split the items into chunks of ~400 (413, IIRC), spawned a separate thread for each chunk, and ran the chunks concurrently (see the sketch after this list).
- Also, during report service development, imported over 3K reports (~100 MB) using this function, with each 25-item batch sent in a separate thread. This was done on localstack.
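A sketch of the chunked, threaded variant, assuming boto3 and the hypothetical table name from above; the chunk size and item count match the test, the rest is illustrative:

```python
import uuid
from concurrent.futures import ThreadPoolExecutor

import boto3

dynamodb = boto3.client("dynamodb")

def write_chunk(table_name, values):
    # batch_write_item accepts at most 25 items per call, so each
    # chunk is split into 25-item batches and written sequentially.
    for i in range(0, len(values), 25):
        dynamodb.batch_write_item(
            RequestItems={
                table_name: [
                    {"PutRequest": {"Item": {"id": {"S": v}}}}
                    for v in values[i:i + 25]
                ]
            }
        )

# ~2k random UUIDs; 2003 is deliberately not a multiple of 25.
items = [str(uuid.uuid4()) for _ in range(2003)]

# One thread per ~400-item chunk, all running concurrently.
chunks = [items[i:i + 413] for i in range(0, len(items), 413)]
with ThreadPoolExecutor(max_workers=len(chunks)) as pool:
    for chunk in chunks:
        pool.submit(write_chunk, "batch-test-provisioned", chunk)
```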
3. Observations
- Localstack doesn't do any throttling, so all items were written.
- The on-demand table behaved similarly. It can throttle in principle, but its limits are far too high for tests of this size to hit.
- With the provisioned-capacity table, I observed several `ProvisionedThroughputExceededException` errors. Exponential backoff handled them: after a few retries, all rows were written. However, for the large concurrent batch I had to increase the base delay to give DDB time to recover (see the retry sketch after this list).
- Example log: https://pastebin.com/embed_js/GmAkXHjk
- I was never able to observe a partial success, i.e. a successful response in which `unprocessed_items` was not empty.
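For context, a minimal sketch of the retry loop, again assuming boto3; this is the shape of the backoff, not the actual implementation. Note that the raw boto3 response reports leftovers under the `UnprocessedItems` key:

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def batch_write_with_backoff(table_name, requests, base_delay=0.5, max_retries=8):
    # Retry one <=25-item batch until DynamoDB accepts everything
    # or the retry budget runs out.
    pending = {table_name: requests}
    for attempt in range(max_retries):
        try:
            response = dynamodb.batch_write_item(RequestItems=pending)
        except ClientError as err:
            if err.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise
            # Fully throttled: keep the whole batch pending and back off.
        else:
            # A partially throttled call succeeds but returns the leftovers.
            pending = response.get("UnprocessedItems") or {}
            if not pending:
                return  # everything written
        # Exponential backoff with jitter; raising base_delay is what the
        # large concurrent run needed to let the table's throughput recover.
        time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
    raise RuntimeError(f"batch not fully written after {max_retries} attempts")
```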