
[services] Backup - Fix data size database limit
ClosedPublic

Authored by karol on Jun 7 2022, 6:53 AM.
Tags
None

Details

Summary

Depends on D4232

The database limit is invalid. I found in the docs that it should be 400KB (link in the code).
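For context, a minimal sketch of the kind of constant this diff adjusts; the constant name here is assumed, and the real one lives in services/backup/src/Constants.h next to the docs link:

```cpp
// services/backup/src/Constants.h (sketch; the actual constant name may differ)
// DynamoDB limits a single item to 400KB; see the docs link in the code.
const size_t LOG_DATA_SIZE_DATABASE_LIMIT = 400 * 1000;
```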

Test Plan

-

Diff Detail

Repository
rCOMM Comm
Lint
No Lint Coverage
Unit
No Test Coverage

Event Timeline

karol edited the test plan for this revision. (Show Details)
karol added a reviewer: tomek.
varun added inline comments.
services/backup/src/Constants.h
Line 23 (On Diff #13389)

Isn't 1KB 1000 bytes? Wondering generally if we're being consistent about kilo vs. kibi, mega vs. mebi, etc.

Personally I prefer using 1024 to 1000, but I agree we should be precise and use KiB instead of KB.

Yes, we should use KiB. Sorry, will fix this.

services/backup/src/Constants.h
Line 22

I'm a little paranoid that Amazon is using "KB" here but they really mean "KiB". I guess it's probably safe for us to assume "KB" (since it is smaller), but wondering – do you have any indication from these docs that Amazon is referring to KB rather than KiB, other than them just using "KB"?

This revision is now accepted and ready to land. Jun 14 2022, 3:59 AM
services/backup/src/Constants.h
Line 22

I do not have anything like that, but I ran a test: when I hardcoded 400KiB (400 * 1024) and tried to put an item of that size into the database, I got an error: "Item size has exceeded the maximum allowed size". The 400KB (400 * 1000) limit works fine.
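A sketch of the kind of probe used for this test, assuming the AWS C++ SDK; the table and attribute names here are hypothetical:

```cpp
#include <aws/dynamodb/DynamoDBClient.h>
#include <aws/dynamodb/model/AttributeValue.h>
#include <aws/dynamodb/model/PutItemRequest.h>

#include <cstddef>
#include <iostream>

// Attempts to put an item whose "data" attribute holds `payloadSize` bytes
// and reports whether DynamoDB accepts it.
bool tryPutItemOfSize(
    Aws::DynamoDB::DynamoDBClient &client,
    std::size_t payloadSize) {
  Aws::DynamoDB::Model::PutItemRequest request;
  request.SetTableName("backup-service-log");
  request.AddItem("id", Aws::DynamoDB::Model::AttributeValue("size-probe"));
  request.AddItem(
      "data",
      Aws::DynamoDB::Model::AttributeValue(Aws::String(payloadSize, 'x')));
  auto outcome = client.PutItem(request);
  if (!outcome.IsSuccess()) {
    std::cout << outcome.GetError().GetMessage() << std::endl;
  }
  return outcome.IsSuccess();
}
```

With payloadSize = 400 * 1024 this would reproduce the error above, while 400 * 1000 goes through, since (as discussed below) the other attributes' names and values also count toward the limit.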

services/backup/src/Constants.h
Line 22

How about some number in between? It's not clear to me if 400 KB is the very highest number allowed, or if 400 KiB is just one byte too high.

Ok, I ran some specific tests for this and it turned out the limit is 400KiB, so basically you were right. Thanks for pushing me to do the research; in the beginning I didn't realize it was this important.

With that said, the current solution would work for most cases. The thing is, the limit refers to the size of the entire item, which consists of all field names' sizes plus their values' sizes. That means it would only break if the sizes of the other fields (their names + values) exceeded (1024 * 400) - (1000 * 400) = 9600 bytes, which in practice is almost impossible considering how the service works right now. But this may change if one day we decide to add some data to this table. Also, it is just good to be precise.
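In code form, the headroom being described here (a sketch; the constant names are illustrative):

```cpp
#include <cstddef>

// DynamoDB's actual per-item limit (400KiB) vs. the KB-based data limit
// from this diff; the difference is the room left for the other fields'
// names and values.
constexpr std::size_t ITEM_SIZE_LIMIT = 400 * 1024; // 409,600 bytes
constexpr std::size_t DATA_SIZE_LIMIT = 400 * 1000; // 400,000 bytes
constexpr std::size_t OTHER_FIELDS_HEADROOM = ITEM_SIZE_LIMIT - DATA_SIZE_LIMIT;
static_assert(OTHER_FIELDS_HEADROOM == 9600, "headroom should be 9600 bytes");
```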

Knowing this, I see I now have to redesign the logic of sending logs/adding attachments a bit.

ashoat requested changes to this revision. Jun 21 2022, 9:54 AM

Thanks a bunch for doing that research! I didn't realize that the size of the other fields factored in here, and I think it's really risky that we're relying on (1024 * 400) - (1000 * 400) = 9600 bytes being enough space for those other things. (Please correct me if I'm misinterpreting.)

Curious for @palys-swm's perspective here also, but ultimately I think we should try to figure out a better "best practice" here. Some things that come to mind:

  1. Maybe we could programmatically guarantee this property during runtime
  2. Would be more difficult, but we could potentially dynamically decide the limit based on the size of the other fields
  3. Probably the best way forward here: is there any way we could write a unit test that would guarantee this property?
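A sketch of what the unit test in point 3 could look like, with all names and worst-case numbers assumed for illustration (the real values would come from Constants.h and the table schema):

```cpp
#include <gtest/gtest.h>

#include <cstddef>

// Hypothetical constants for the sake of the sketch.
constexpr std::size_t DYNAMODB_ITEM_SIZE_LIMIT = 400 * 1024;
constexpr std::size_t MAX_DATA_FIELD_SIZE = 400 * 1000;
constexpr std::size_t WORST_CASE_OTHER_FIELDS_SIZE = 9600;

TEST(BackupConstantsTest, DataLimitLeavesRoomForOtherFields) {
  // The data limit plus the worst-case combined size of every other
  // field's name and value must fit within DynamoDB's per-item limit.
  EXPECT_LE(
      MAX_DATA_FIELD_SIZE + WORST_CASE_OTHER_FIELDS_SIZE,
      DYNAMODB_ITEM_SIZE_LIMIT);
}
```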
This revision now requires changes to proceed. Jun 21 2022, 9:54 AM

I created a task to track this: https://linear.app/comm/issue/ENG-1288/properly-handle-the-dynamodb-item-size-limit

I pushed follow-ups for this. Even though some of the changes in these follow-ups could be pushed as updates to the existing diffs, I think it's better to create new diffs: if we abandon the entire idea, it's easier to just remove them, and the original stack will not be affected.

The part of the stack that takes care of this: D4320 - D4326

  1. Maybe we could programmatically guarantee this property during runtime

Not sure if I understand what you mean. "This property" means that we check whether the item really exceeds the limit or not, I suppose. If so, my solution does this.

  2. Would be more difficult, but we could potentially dynamically decide the limit based on the size of the other fields

Same here, not sure I follow. I figured out an effective way of calculating the item's size; please check getItemSize in D4320.
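D4320 isn't inlined here, but per the earlier discussion the helper presumably follows DynamoDB's rule that an item's size is the sum of attribute name lengths and value sizes. A rough sketch under that assumption (string attributes only; other types have their own sizing rules, and the real getItemSize may differ):

```cpp
#include <aws/dynamodb/model/AttributeValue.h>

#include <cstddef>
#include <map>

// Approximates an item's DynamoDB size as the sum of each attribute
// name's length plus its string value's length.
std::size_t getItemSizeSketch(
    const std::map<Aws::String, Aws::DynamoDB::Model::AttributeValue>
        &item) {
  std::size_t size = 0;
  for (const auto &[name, value] : item) {
    size += name.size() + value.GetS().size();
  }
  return size;
}
```

The runtime check from point 1 then reduces to rejecting a write whenever this sum exceeds 400 * 1024 before calling PutItem.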

  3. Probably the best way forward here: is there any way we could write a unit test that would guarantee this property?

I don't understand, sorry.

Please check out what's in the task I posted; we can follow up there. The implementation of the idea will be pushed to the part of the stack I linked above.

Thanks, your new stack makes sense to me. That said, we probably shouldn't land this one since it says the limit is "400KB limit (KB, not KiB)", which as I understand it is incorrect. Can we fix up the comment before landing?

This revision is now accepted and ready to land. Jun 22 2022, 11:06 AM

Ah, I see this is fixed in D4324. Would've been less confusing to fix the comment here I think

I totally agree, here's the reason why I did this as a separate change:

I pushed follow-ups for this. Even though some of the changes in these follow-ups could be pushed as updates to the existing diffs, I think it's better to create new diffs: if we abandon the entire idea, it's easier to just remove them, and the original stack will not be affected.