Details
- Reviewers
tomek, inka, rohan
- Commits
- rCOMMeadf8c40372f: [lib] Add chat mention SearchIndex selector
Tested later in the stack.
Diff Detail
- Repository
- rCOMM Comm
- Lint
No Lint Coverage
- Unit
No Test Coverage
Event Timeline
If I understand correctly, this structure is built by creating a SentencePrefixSearchIndex for each community (or for each subtree whose root is a child of GENESIS), and having a "map" from every chat to its corresponding SentencePrefixSearchIndex (see the sketch below). But I don't understand why we needed to exclude the thread itself from its chatMentionCandidates, yet we don't need to exclude it from this structure. Can you explain please?
Also - I think this code will change a lot after D8833 is refactored.
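For reference, here is a minimal sketch of the structure described above. It is not the diff's actual code (the real selector lives in lib/ and uses Flow); the SentencePrefixSearchIndex API (addEntry / getSearchResults), the ThreadInfo shape, and the builder's name and signature are all assumptions made for illustration.

```typescript
// Tiny stand-in for Comm's SentencePrefixSearchIndex (API assumed).
class SentencePrefixSearchIndex {
  private entries = new Map<string, string>();
  addEntry(id: string, text: string): void {
    this.entries.set(id, text.toLowerCase());
  }
  getSearchResults(query: string): string[] {
    const q = query.toLowerCase();
    return Array.from(this.entries)
      .filter(([, text]) => text.split(' ').some(w => w.startsWith(q)))
      .map(([id]) => id);
  }
}

type ThreadInfo = {
  id: string;
  community: string | null;
  uiName: string;
};

// One index per community (or per subtree rooted at a child of GENESIS),
// plus a map from every chat's threadID to the index it should query.
// Community-root resolution (including the genesis special case discussed
// further down) is taken as a parameter here.
function buildChatMentionSearchIndices(
  threadInfos: { [id: string]: ThreadInfo },
  resolveCommunityRoot: (threadInfo: ThreadInfo) => string,
): { [threadID: string]: SentencePrefixSearchIndex } {
  const indexPerCommunity = new Map<string, SentencePrefixSearchIndex>();
  const result: { [threadID: string]: SentencePrefixSearchIndex } = {};

  for (const threadID in threadInfos) {
    const threadInfo = threadInfos[threadID];
    const communityID = resolveCommunityRoot(threadInfo);

    let index = indexPerCommunity.get(communityID);
    if (!index) {
      index = new SentencePrefixSearchIndex();
      indexPerCommunity.set(communityID, index);
    }
    // Every chat in the community becomes a mention candidate in its
    // community's index; note the thread itself is NOT excluded here,
    // which is what the question above is about.
    index.addEntry(threadID, threadInfo.uiName);
    result[threadID] = index;
  }
  return result;
}
```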
Discussed this offline:
> But I don't understand why we needed to exclude the thread itself from its chatMentionCandidates, yet we don't need to exclude it from this structure. Can you explain please?
Excluding the thread we're mentioning in was handled by getMentionTypeaheadChatSuggestions, introduced in D8910 (see the for (const threadID in chatMentionCandidates) loop). But this is going to be modified: to avoid the unnecessary for loop, we can either exclude the thread itself in useThreadChatMentionSearchIndex, or just add an if condition in getMentionTypeaheadChatSuggestions that filters out the current thread ID.
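A hedged sketch of the second option (the if condition), reusing the SentencePrefixSearchIndex stand-in from the earlier sketch; the signature and candidate shape are assumptions rather than D8910's actual code:

```typescript
type ChatMentionCandidate = { threadID: string; uiName: string };

function getMentionTypeaheadChatSuggestions(
  chatSearchIndex: SentencePrefixSearchIndex,
  chatMentionCandidates: { [threadID: string]: ChatMentionCandidate },
  currentThreadID: string,
  typedPrefix: string,
): ChatMentionCandidate[] {
  const suggestions: ChatMentionCandidate[] = [];
  for (const threadID of chatSearchIndex.getSearchResults(typedPrefix)) {
    // The "if condition" mentioned above: skip the thread we're typing in,
    // so it no longer needs to be excluded when the index is built.
    if (threadID === currentThreadID) {
      continue;
    }
    const candidate = chatMentionCandidates[threadID];
    if (candidate) {
      suggestions.push(candidate);
    }
  }
  return suggestions;
}
```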
btw, I think that getMentionTypeaheadChatSuggestions should not be defined in D8910, but in a separate diff...
| lib/shared/thread-utils.js | | |
|---|---|---|
| 1780–1794 ↗ | (On Diff #30614) | There are two possible performance issues here: |
| 1803–1810 ↗ | (On Diff #30614) | Why do we have to do something special for genesis here? |
| lib/shared/thread-utils.js | | |
|---|---|---|
| 1780–1794 ↗ | (On Diff #30614) | |
| 1803–1810 ↗ | (On Diff #30614) | See the argument: communityThreadIDForGenesisThreads[threadInfo.id]. |
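For illustration, here is a sketch of the genesis handling referenced in the row above, assuming communityThreadIDForGenesisThreads is a precomputed map from each genesis descendant to the thread ID acting as its community root; none of this is verbatim from the diff:

```typescript
// Chats whose community is GENESIS are grouped by a top-level ancestor under
// GENESIS rather than by GENESIS itself, so they need a separate lookup.
function resolveCommunityRootForThread(
  threadInfo: { id: string; community: string | null },
  genesisThreadID: string,
  communityThreadIDForGenesisThreads: { [threadID: string]: string },
): string {
  if (
    threadInfo.community === null ||
    threadInfo.community === genesisThreadID
  ) {
    // Genesis descendants fall back to the precomputed mapping.
    return communityThreadIDForGenesisThreads[threadInfo.id] ?? threadInfo.id;
  }
  return threadInfo.community;
}
```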
It could, because every time something changes inside one community, we recompute every search index instead of just the one that might be affected. But this shouldn't be a big issue - it's only one rare render. In the solution I proposed, this issue would still be present.
So overall, my ideas were about improving the performance of the algorithm, without significantly reducing the number of renders.
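To make the granularity point concrete, here is a rough sketch (not the diff's implementation) of a per-community cache that rebuilds only the index whose inputs actually changed, reusing the SentencePrefixSearchIndex stand-in from the first sketch:

```typescript
// Cache keyed per community; the key string is a cheap change-detection
// mechanism scoped to that community's own chats only.
const communityIndexCache = new Map<
  string,
  { inputsKey: string; index: SentencePrefixSearchIndex }
>();

function getSearchIndexForCommunity(
  communityID: string,
  communityChats: Array<{ id: string; uiName: string }>,
): SentencePrefixSearchIndex {
  const inputsKey = JSON.stringify(communityChats);
  const cached = communityIndexCache.get(communityID);
  if (cached && cached.inputsKey === inputsKey) {
    // Nothing in this community changed: reuse the existing index.
    return cached.index;
  }
  const index = new SentencePrefixSearchIndex();
  for (const { id, uiName } of communityChats) {
    index.addEntry(id, uiName);
  }
  communityIndexCache.set(communityID, { inputsKey, index });
  return index;
}
```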