
[services] Backup - Add blob get client reactor
ClosedPublic

Authored by karol on Apr 6 2022, 7:35 AM.
Tags
None

Details

Summary

Depends on D3627

Adds a blob client reactor for the get operation.

Test Plan

The same as in D3535

Diff Detail

Repository
rCOMM Comm
Lint
Lint Not Applicable
Unit
Tests Not Applicable

Event Timeline

Harbormaster returned this revision to the author for changes because remote builds failed. Apr 6 2022, 7:44 AM
Harbormaster failed remote builds in B7921: Diff 11103!
karol edited the summary of this revision.

retrigger ci

Harbormaster returned this revision to the author for changes because remote builds failed. Apr 7 2022, 3:52 AM
Harbormaster failed remote builds in B7960: Diff 11164!
tomek requested changes to this revision. Apr 8 2022, 2:42 AM
tomek added inline comments.
services/backup/docker-server/contents/server/src/Reactors/client/blob/BlobGetClientReactor.h
Lines 3–15 (On Diff #11164)

I don't think all of these includes are used.

Line 44 (On Diff #11164)

It looks like we write to the queue chunk by chunk. Are we going to keep writing until all the chunks are received? If so, this doesn't scale well: with a couple of concurrent reactors each reading big blobs, we could easily exhaust memory. It would also introduce a significant delay, because we would wait before sending anything to the client.

But it might also be that this isn't what's happening, so please clarify.

This revision now requires changes to proceed. Apr 8 2022, 2:42 AM
karol added inline comments.
services/backup/docker-server/contents/server/src/Reactors/client/blob/BlobGetClientReactor.h
Lines 3–15 (On Diff #11164)

We can remove:

#include "Constants.h"
#include <condition_variable>
#include <iostream>
Line 44 (On Diff #11164)

I'm trying to save as much memory as possible by moving the data around instead of copying it. I'm not sure how exactly we could avoid what you're describing - we have to handle big chunks of data somehow.

tomek requested changes to this revision. Apr 11 2022, 6:25 AM
tomek added inline comments.
services/backup/docker-server/contents/server/src/Reactors/client/blob/BlobGetClientReactor.h
Line 44 (On Diff #11164)

Saving memory is a good practice, but we should be prepared for blobs of arbitrary size. Is there a reason why we can't start sending data to the client as soon as the first chunk is ready?

This revision now requires changes to proceed. Apr 11 2022, 6:25 AM
karol added inline comments.
services/backup/docker-server/contents/server/src/Reactors/client/blob/BlobGetClientReactor.h
Line 44 (On Diff #11164)

We do this.

As soon as we get data from the other side in readResponse, we add it to the queue. We have to use a queue because these are all async calls. In the other reactor, as soon as there is an item in the queue, we take it off and use it.

How else could this be solved?

tomek added inline comments.
services/backup/docker-server/contents/server/src/Reactors/client/blob/BlobGetClientReactor.h
Lines 3–15 (On Diff #11164)

Ok, so please remove them.

Line 44 (On Diff #11164)

Ok, thanks for the clarification - the behavior is correct. I was asking about this in my first comment:

> Are we going to keep writing until all the chunks are received?

> it would mean that we introduce a significant delay, because we wait with sending them to client.

It would have saved us some time if you had given that answer earlier.

This revision is now accepted and ready to land. Apr 12 2022, 9:11 AM
services/backup/docker-server/contents/server/src/Reactors/client/blob/BlobGetClientReactor.h
Line 44 (On Diff #11164)

Ok, sorry, I might've misunderstood here.

This revision was landed with ongoing or failed builds. Apr 13 2022, 5:11 AM
This revision was automatically updated to reflect the committed changes.