Differential D9625
[lib] Avoid constructing multiple duplicate tokenizers in SearchIndex
Authored by ashoat on Oct 27 2023, 11:23 AM.

Summary

This simple change improves the performance of useChatMentionSearchIndex by 45% in my local environment. Before this change it took an average of 1777.5ms; after, it takes an average of 978ms.

Linear task: ENG-5480
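The diff itself isn't reproduced in this excerpt, but the general pattern is easy to sketch. The code below is illustrative only (the class and method names are placeholders, not Comm's actual SearchIndex or tokenizer API): it assumes the cost came from constructing a fresh tokenizer for every SearchIndex, and shows how hoisting a single shared tokenizer avoids paying that cost repeatedly.

```typescript
// Illustrative sketch only: names are placeholders, not Comm's actual code.
// Assumption: constructing a tokenizer is comparatively expensive (regex
// compilation, option parsing, etc.), so building one per SearchIndex adds up
// when useChatMentionSearchIndex builds many indices.

class Tokenizer {
  // Stand-in for the real tokenization library the SearchIndex wraps.
  words(): (text: string) => string[] {
    return (text: string) =>
      text.toLowerCase().split(/\s+/).filter(token => token.length > 0);
  }
}

// BEFORE: each SearchIndex constructs its own duplicate tokenizer.
class SearchIndexBefore {
  tokenize: (text: string) => string[] = new Tokenizer().words();
}

// AFTER: one tokenizer is constructed at module scope and shared by every
// SearchIndex, so the construction cost is paid only once.
const sharedTokenize = new Tokenizer().words();

class SearchIndexAfter {
  tokenize: (text: string) => string[] = sharedTokenize;
}
```

Since the tokenizer in this sketch is stateless, sharing one instance across indices is safe; if the real tokenizer held per-index state, the equivalent fix would be to construct it once per index and reuse it across calls.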
Test Plan

I used this patch to test performance before and after this change, and made sure I had at least three samples of each scenario. I will also link my messy Gist of results, but it's not really interpretable by anyone other than me. Here's the relevant portion:

BEFORE
LOG  useChatMentionSearchIndex took 1801ms
LOG  useChatMentionSearchIndex took 1748ms
LOG  useChatMentionSearchIndex took 1730ms
LOG  useChatMentionSearchIndex took 1831ms
AVERAGE 1777.5ms

JUST DEDUP
LOG  useChatMentionSearchIndex took 1027ms
LOG  useChatMentionSearchIndex took 949ms
LOG  useChatMentionSearchIndex took 957ms
AVERAGE 977.7ms
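The measurement patch is linked from the revision rather than reproduced here; a minimal sketch of instrumentation that would produce logs in this shape (the `timed` helper and the `buildIndex` callback are hypothetical) looks like:

```typescript
// Hypothetical timing helper; the actual measurement patch may differ.
function timed<T>(label: string, build: () => T): T {
  const start = Date.now();
  const result = build();
  // Emits lines like "useChatMentionSearchIndex took 1801ms".
  console.log(`${label} took ${Date.now() - start}ms`);
  return result;
}

// Example usage around the work being measured (buildIndex is a placeholder
// for whatever useChatMentionSearchIndex actually computes):
// const searchIndex = timed('useChatMentionSearchIndex', buildIndex);
```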