
[keyserver][lib][web][native] use local search for thick threads
Closed, Public

Authored by inka on Jul 24 2024, 1:58 AM.
Tags
None
Subscribers

Details

Summary

issue: ENG-8653

Test Plan

Tested by commenting out the isThreadThin check and the extractKeyserverIDFromIDOptional(messageInfo.threadID) check in getMessageSearchStoreOps. This allowed me to treat a thin thread as if it were thick and execute local search on it.

Tested that messages are fetched ordered by timestamp, descending. Tested that in thin threads (with the above hacks removed) search works as before.

One thing that doesn't work in thick threads is fetching the next pages of results. This is because timestamps exceed the range of a C++ int, so SQLiteQueryExecutor::searchMessages doesn't bind them to the SQL query correctly. Fixing this is tracked in ENG-8893.
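For context on the overflow: Unix timestamps in milliseconds are on the order of 10^12, far beyond the 32-bit signed integer maximum of 2^31 - 1 ≈ 2.1 * 10^9, so any binding path that narrows the cursor to a 32-bit int garbles it. A minimal JavaScript sketch of the arithmetic:

```
// Current Unix time in milliseconds, e.g. Jul 24 2024 ≈ 1.72e12.
const timestamp = Date.now();

// Largest value a 32-bit signed int (a C++ `int` on most platforms) can hold.
const INT32_MAX = 2 ** 31 - 1; // 2147483647

console.log(timestamp > INT32_MAX); // true: the cursor no longer fits
console.log(timestamp | 0); // bitwise ops coerce to int32, corrupting the value
```

Binding the value as a 64-bit integer (or, as this stack does, passing it as a string) avoids the truncation.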

Diff Detail

Repository
rCOMM Comm
Branch
inka/search
Lint
No Lint Coverage
Unit
No Test Coverage

Event Timeline

inka requested review of this revision. Jul 24 2024, 2:15 AM
inka added inline comments.
lib/actions/message-actions.js
496

@kamil do you remember why we decided to use a string for timestampCursor? Is it because it's easier to handle when passing to C++? Or did I use a string in tests, and you just followed that?

Re the test plan note that fetching the next pages of results doesn't work in thick threads (timestamps exceed the range of a C++ int; tracked in ENG-8893): the next diff fixes this.

tomek added inline comments.
lib/actions/message-actions.js
477

Can we use defaultNumberPerThread from lib/types/message-types.js?

507–513

It's up to you, but this can be simplified

native/search/message-search.react.js
49–50

We could also use a single state variable with the whole message info

web/search/message-search-state-provider.react.js
42–47

Would it be simpler if we had a single state variable with the whole message info?

kamil added inline comments.
lib/actions/message-actions.js
496

This is on purpose, for safety. In general, we have a convention of passing numbers as strings to .wasm or native code, to avoid issues with encoding on different platforms.
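A hypothetical sketch of what that convention looks like at the call boundary; the parameter shape and helper below are illustrative, not the actual Comm API:

```
// Flow sketch (illustrative only): the cursor crosses the JS <-> native/.wasm
// boundary as a decimal string, so no platform ever narrows it to a 32-bit int.
type SearchMessagesParams = {
  +query: string,
  +threadID: string,
  // Millisecond timestamp serialized as a string; the native side parses it
  // into a 64-bit integer before binding it to the SQL query.
  +timestampCursor: ?string,
};

function makeTimestampCursor(timestamp: number): string {
  return timestamp.toString();
}
```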

This revision is now accepted and ready to land. Jul 25 2024, 1:42 AM
lib/actions/message-actions.js
496

Thank you for explaining!

Address review

native/search/message-search.react.js
49–50

I don't like that. They are set in different places, and we want memos to depend on them separately, so I feel like they should be separate.

lib/actions/message-actions.js
498 ↗(On Diff #42827)

Nit: seems like a capitalization typo, should this be messageIDCursor?

lib/types/message-types.js
697–698 ↗(On Diff #42827)

Curious why we need both cursors

lib/actions/message-actions.js
498 ↗(On Diff #42827)

Created D12965 to address this.

lib/types/message-types.js
697–698 ↗(On Diff #42827)

(This was discussed here.)
On the keyserver, message IDs grow with time and no two IDs are identical. This means a newer message has a bigger ID, and messages with the same timestamp still have different IDs, so we can sort them deterministically.
In DMs, message IDs are random, so we cannot sort by them; instead we sort by timestamp. But on the off chance that two messages have the same timestamp, we need some way to deterministically decide which comes "first".
This way, if a page of results happens to end on a timestamp that is shared between messages, but not all messages with that timestamp fit in the page, we know which messages to send in the next page of results. This lets us avoid repeating or missing messages. A sketch of the comparison this implies follows below.
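An illustrative sketch of the compound-cursor (keyset) pagination described above; the names and shapes are hypothetical, not the actual Comm implementation, and the timestamp is numeric here for clarity even though the real cursor is passed as a string:

```
// Order: timestamp descending, with messageID as a deterministic tie-breaker.
type SearchResult = { +messageID: string, +timestamp: number };

function compareResults(a: SearchResult, b: SearchResult): number {
  if (a.timestamp !== b.timestamp) {
    return b.timestamp - a.timestamp; // newer (larger timestamp) first
  }
  // Same timestamp: fall back to messageID so the ordering is deterministic.
  return a.messageID < b.messageID ? -1 : 1;
}

// A message belongs on the next page if it sorts strictly after the cursor,
// i.e. after the last message returned on the previous page.
function isOnNextPage(
  message: SearchResult,
  cursor: { +timestampCursor: number, +messageIDCursor: string },
): boolean {
  if (message.timestamp !== cursor.timestampCursor) {
    return message.timestamp < cursor.timestampCursor;
  }
  return message.messageID > cursor.messageIDCursor;
}
```

Because the cursor carries both fields, a page boundary that falls inside a run of equal timestamps resumes exactly where the previous page stopped.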

lib/types/message-types.js
697–698 ↗(On Diff #42827)

Thank you for explaining!