First off I need to clarify that our usage of Stream is fairly non-standard. In our application we need to fetch all channels up front because we do channel categorisation. We cannot categorise on scroll and infinitely load that way because channels will jump around.
Our issue is that the response from client.queryChannels doesn't return a total. Because of this we need to request channels sequentially rather than in parallel. We see some requests for channels in the 3-5s range pretty frequently.
If a coach has 100 users assigned to them, we need to make 4 requests due to the query limit being 30.
If Stream exposed a `total: number` in the API response we could request once to get the first page and the total, then fetch the rest in parallel, which would improve performance for us pretty drastically.
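To make the request concrete, here's a rough sketch of what that flow could look like if the response carried a hypothetical `total` field. The response shape, `remainingOffsets`, and `fetchAllChannels` are all illustrative, not the real API:

```ts
// Sketch only: assumes a hypothetical `total` field on the response,
// which the current API does not provide. Names are illustrative.
const QUERY_LIMIT = 30

// Pure helper: offsets for the pages remaining after the first one.
function remainingOffsets(total: number, limit: number): number[] {
  const pages = Math.ceil(total / limit)
  return Array.from({ length: Math.max(pages - 1, 0) }, (_, i) => (i + 1) * limit)
}

// One request to learn the total, then the rest in parallel.
async function fetchAllChannels(client: any, filters: object) {
  const first = await client.queryChannels(filters, undefined, {
    limit: QUERY_LIMIT,
    offset: 0,
  })
  // `first.total` is the hypothetical field this issue is asking for
  const rest = await Promise.all(
    remainingOffsets(first.total, QUERY_LIMIT).map((offset) =>
      client.queryChannels(filters, undefined, { limit: QUERY_LIMIT, offset }),
    ),
  )
  return [first, ...rest].flatMap((page) => page.channels)
}
```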
With the current way the API is designed we can sorta force parallel requests with something like this (naïve implementation):
```ts
// Super rough and too static
const ESTIMATED_MAX_CHANNELS = 300

const requests = Math.ceil(ESTIMATED_MAX_CHANNELS / QUERY_LIMIT)
const range = Array.from({ length: requests }, (_, i) => i * QUERY_LIMIT)

const responses = await Promise.all(
  range.map((offset) =>
    client.queryChannels(filters, undefined, {
      limit: QUERY_LIMIT,
      offset,
      watch: true,
      state: true,
      message_limit: 50,
    }),
  ),
)

const data = responses.flat()
```
This could be improved by batching 100 channels at a time in parallel so we can overcome the `ESTIMATED_MAX_CHANNELS` limitation.
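For reference, a hedged sketch of that batching idea against the API as it exists today: fetch a fixed number of pages per round trip and stop once any page comes back short. `BATCH_PAGES`, `batchOffsets`, and `fetchAllChannelsBatched` are illustrative names, not Stream APIs:

```ts
// Sketch: batch pages in parallel without knowing the total, stopping
// when a page comes back shorter than the limit. Helper names are ours.
const QUERY_LIMIT = 30
const BATCH_PAGES = 4 // ~120 channels per round of requests

// Pure helper: the offsets covered by one batch of pages.
function batchOffsets(batch: number, pagesPerBatch: number, limit: number): number[] {
  return Array.from({ length: pagesPerBatch }, (_, i) => (batch * pagesPerBatch + i) * limit)
}

async function fetchAllChannelsBatched(client: any, filters: object) {
  const all: unknown[] = []
  for (let batch = 0; ; batch++) {
    const pages = await Promise.all(
      batchOffsets(batch, BATCH_PAGES, QUERY_LIMIT).map((offset) =>
        client.queryChannels(filters, undefined, { limit: QUERY_LIMIT, offset }),
      ),
    )
    pages.forEach((page) => all.push(...page))
    // a short (or empty) page means we've run past the last channel
    if (pages.some((page) => page.length < QUERY_LIMIT)) return all
  }
}
```

The trade-off versus the static estimate is one wasted round of empty requests when the channel count lands exactly on a batch boundary.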
All of these are less efficient than the API telling us the total up front, however, and I'd rather not hit the Stream API more than we need to.
Unfortunately, we can't accommodate new features that support non-standard use cases of our API. Front loading all channels is something we advise against as it can cause performance issues on the client-side and rate limit issues.
I'm not familiar with your use-case, but if the issue is sorting ("We cannot categorise on scroll and infinitely load that way because channels will jump around."), you can use custom data on channels to categorize them and then use queryChannels to filter by category.
```ts
queryChannels({ 'custom.category': 'Category 1' })
// once the user scrolled to the bottom of this category, switch to the next query
queryChannels({ 'custom.category': 'Category 2' })
```
The limitation is that your channel list will be sorted by category first, but it's a performant solution because you don't have to fire off multiple requests in parallel.
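For anyone following along, the suggested pattern could look roughly like this. `loadNextPage`, `CATEGORIES`, and the state shape are illustrative; only `queryChannels` and the `'custom.category'` filter come from the suggestion above:

```ts
// Sketch: paginate within one category, advancing to the next category's
// filter once a page comes back short. Names besides queryChannels are ours.
const CATEGORIES = ['Category 1', 'Category 2']
const QUERY_LIMIT = 30

async function loadNextPage(client: any, state: { cat: number; offset: number }) {
  if (state.cat >= CATEGORIES.length) return [] // nothing left to load
  const page = await client.queryChannels(
    { 'custom.category': CATEGORIES[state.cat] },
    undefined,
    { limit: QUERY_LIMIT, offset: state.offset },
  )
  if (page.length < QUERY_LIMIT) {
    // this category is exhausted; the next call starts the next category
    state.cat += 1
    state.offset = 0
  } else {
    state.offset += QUERY_LIMIT
  }
  return page
}
```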
Hello, that's an interesting suggestion. Our categories are a mix of time-based and message-based. For example:
Active Coach
Active User
Needs Follow up
These would need to be computed periodically and/or on event, ideally on the backend. I'll run it by the team.
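If it helps the discussion, here's a rough sketch of what that backend job could look like: a pure rule that buckets a channel by last-message recency, written back as channel custom data. The thresholds, field names, and `categorize`/`tagChannel` helpers are all hypothetical; `updatePartial` with `set` is the client call we'd expect to use, worth double-checking against the docs:

```ts
// Hypothetical sketch: recompute a channel's category server-side and
// store it as custom data. Thresholds and names are illustrative.
type Category = 'active-coach' | 'active-user' | 'needs-follow-up'

const DAY_MS = 24 * 60 * 60 * 1000

// Pure rule: whoever messaged recently decides the bucket (3 days is arbitrary).
function categorize(
  lastCoachMsgAt: Date | null,
  lastUserMsgAt: Date | null,
  now: Date,
): Category {
  if (lastCoachMsgAt && now.getTime() - lastCoachMsgAt.getTime() < 3 * DAY_MS) {
    return 'active-coach'
  }
  if (lastUserMsgAt && now.getTime() - lastUserMsgAt.getTime() < 3 * DAY_MS) {
    return 'active-user'
  }
  return 'needs-follow-up'
}

// Persist without clobbering other custom data (`set` only touches listed keys).
async function tagChannel(channel: any, category: Category) {
  await channel.updatePartial({ set: { category } })
}
```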
I do understand that our use-case is non-standard. However, if Stream paginates, it also makes sense for there to be some indicator of position, whether a cursor or current page + total pages. I think that's standard for a pagination API.