Investigate the number of simultaneous rooms and maximum number of users per room before the Sprig collab feature breaks. This should help us put a cap on the max number of users we allow per collab room.
Things we want to investigate
max. number of collab rooms
max. number of users per collab room
max. capacity of the signaling server
Updates on this issue: I ran a few tests some time ago, and below are the results I got from the test run.
The test is as follows
It starts with two rooms and two clients per room
A random client in each room sends data (a string)
The saving server, acting as a peer in the room, receives a copy of the data sent
It saves the data to Firebase and sends an acknowledgement to all other peers in the room that the data has been saved
We measure the time elapsed between when the new update was received via WebRTC and when the data was saved to Firebase, and later compute the average of these times (see the sketch below).
On the sender's side, we measure the time elapsed between the last update and when the newest update is received, and later compute the average of these gaps.
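For illustration, here is a minimal TypeScript sketch of the two measurements above. The helper and callback names (`saveToFirebase`, `broadcastAck`, `onDataReceivedBySavingServer`, `onUpdateReceived`) are assumptions standing in for the actual collab/signaling code, not the real test harness.

```ts
// Minimal sketch of the two timing measurements (assumed helper names, not the real collab API).

// Stubs standing in for the actual Firebase write and WebRTC broadcast used in the test.
async function saveToFirebase(_payload: string): Promise<void> {
  // The real test writes the update to Firebase here.
}
function broadcastAck(_roomId: string): void {
  // The real test notifies the other peers in the room here.
}

// Saving-server side: time from WebRTC receipt to the Firebase save completing.
const saveDurations: number[] = [];

async function onDataReceivedBySavingServer(roomId: string, payload: string): Promise<void> {
  const receivedAt = Date.now();               // update arrived over WebRTC
  await saveToFirebase(payload);               // persist the update
  saveDurations.push(Date.now() - receivedAt); // elapsed time for this save
  broadcastAck(roomId);                        // acknowledge to the other peers
}

// Sender side: gap between the previous update and the newest update being received.
const updateGaps: number[] = [];
let lastUpdateAt: number | null = null;

function onUpdateReceived(): void {
  const now = Date.now();
  if (lastUpdateAt !== null) {
    updateGaps.push(now - lastUpdateAt);
  }
  lastUpdateAt = now;
}

// Averages reported at the end of a run for each rooms x clients configuration.
const average = (xs: number[]): number =>
  xs.length === 0 ? 0 : xs.reduce((a, b) => a + b, 0) / xs.length;

console.log("avg time to save to Firebase (ms):", average(saveDurations));
console.log("avg gap between updates (ms):", average(updateGaps));
```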
At 13 rooms and 14 clients per room, roughly half a second elapses between an update being sent and other peers receiving it, which I think is good.
There's one data point, at 14 rooms and 14 clients per room, where we're at 2.7 seconds, but I think that's just an outlier.
It consistently takes under a second to save data to Firebase.