
[keyserver] Debounce one-time key refresh request
Closed, Public

Authored by ashoat on Mar 24 2024, 7:49 PM.
Tags
None
Subscribers
None

Details

Summary

The Identity service queues up a request for 5 new one-time keys every time it fails to find one. If a keyserver is offline for a while, this can lead to a large backlog of queued one-time key requests.

Instead of adding 5 new one-time keys for each queued message, I think we should debounce the requests. This diff makes us process just two requests (one leading and one trailing) if we see a large queue on startup. From there, each additional message is processed independently.
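For illustration, here is a minimal sketch of a leading-plus-trailing debounce over a fixed wait window. All names here (debounceLeadingTrailing, refreshOneTimeKeys) and the wait duration are hypothetical and not taken from the actual keyserver code:

```
// Simplified leading+trailing debounce: the first call in a burst runs
// immediately, and all further calls within the window collapse into a
// single trailing call at the end of the window.
function debounceLeadingTrailing(
  fn: () => void,
  waitMs: number,
): () => void {
  let timer: ReturnType<typeof setTimeout> | null = null;
  let trailingCallPending = false;

  return () => {
    if (timer === null) {
      // Leading edge: run the first request in a burst right away.
      fn();
      timer = setTimeout(() => {
        timer = null;
        if (trailingCallPending) {
          trailingCallPending = false;
          // Trailing edge: one run covers everything that arrived
          // while the window was open.
          fn();
        }
      }, waitMs);
    } else {
      // Mid-window requests just mark the trailing call as needed.
      trailingCallPending = true;
    }
  };
}

// Hypothetical usage: a burst of 10 queued requests at startup yields
// at most two actual refreshes (one leading, one trailing).
const refreshOneTimeKeys = debounceLeadingTrailing(() => {
  // generate a batch of one-time keys and upload them
}, 5000);
for (let i = 0; i < 10; i++) {
  refreshOneTimeKeys();
}
```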

This addresses ENG-7383. More details there.

Depends on D11375

Test Plan

I enqueued 10 requests for new one-time keys, then started the keyserver and observed that only 10 one-time keys were added to DynamoDB. Since each request asks for 5 keys, this means only two of the requests (the leading and trailing ones) were processed.

Diff Detail

Repository
rCOMM Comm
Lint
Not Applicable
Unit Tests
Not Applicable

Event Timeline

This looks reasonable, but I'm wondering if we shouldn't solve this on the Identity service instead: it knows how many keys are available and how many are needed to fulfill requests, so it could ask a keyserver for the appropriate number. It should also know that a keyserver was asked for keys recently, and could decide either not to ask for more or to ask for, e.g., only one more.
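As a hedged sketch of that alternative, the Identity service could compute the refill size itself. Every name and threshold below (keysToRequest, TARGET_KEY_COUNT, MIN_REQUEST_INTERVAL_MS) is a hypothetical illustration, not existing Identity code:

```
// Hypothetical: Identity decides how many one-time keys to ask a
// keyserver for, based on its pool size and how recently it asked.
const TARGET_KEY_COUNT = 10; // assumed target pool size
const MIN_REQUEST_INTERVAL_MS = 60_000; // assumed cooldown between asks

function keysToRequest(
  availableKeys: number,
  lastRequestedAt: number | null,
  now: number,
): number {
  const deficit = Math.max(0, TARGET_KEY_COUNT - availableKeys);
  if (deficit === 0) {
    return 0;
  }
  // If this keyserver was asked recently, ask for at most one more key
  // instead of re-requesting the full deficit.
  if (
    lastRequestedAt !== null &&
    now - lastRequestedAt < MIN_REQUEST_INTERVAL_MS
  ) {
    return 1;
  }
  return deficit;
}
```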

This revision is now accepted and ready to land. Mar 25 2024, 6:51 AM

Yes, we could consider skipping Tunnelbroker and having clients connect directly to Identity (e.g. via WebSocket).

That said, it would require changing some of the architecture, and this is a quick fix that works, so I'm going to land it for now.