sliding-sync/sync3/handler/lazy_member.go
Kegan Dougal be8543a21a add extensions for typing and receipts; bugfixes and additional perf improvements
Features:
 - Add `typing` extension.
 - Add `receipts` extension.
 - Add comprehensive prometheus `/metrics` activated via `SYNCV3_PROM`.
 - Add `SYNCV3_PPROF` support.
 - Add `by_notification_level` sort order.
 - Add `include_old_rooms` support.
 - Add support for `$ME` and `$LAZY`.
 - Add correct filtering when `*,*` is used as `required_state`.
 - Add `num_live` to each room response to indicate how many timeline entries are live.

Bug fixes:
 - Use a stricter comparison function on ranges: fixes an issue whereby unit tests fail on go1.19 due to a change in the sorting algorithm.
 - Send back an `errcode` on HTTP errors (e.g. expired sessions).
 - Remove `unsigned.txn_id` on insertion into the DB. Otherwise users would see each other's txn IDs :(
 - Improve range delta algorithm: previously it didn't handle cases like `[0,20] -> [20,30]` and would panic.
 - Send HTTP 400 for invalid range requests.
 - Don't publish no-op unread counts, which just add extra noise.
 - Fix leaking DB connections which could eventually consume all available connections.
 - Ensure we always unblock `WaitUntilInitialSync`, even on invalid access tokens. Other code relies on `WaitUntilInitialSync()` actually returning at _some_ point: e.g. on startup we have N workers which bound the number of concurrent pollers at any one time, so we must not hog a worker forever.

Improvements:
 - Greatly improve startup times of sync3 handlers by improving `JoinedRoomsTracker`: a modest amount of data would take ~28s to create the handler, now it takes 4s.
 - Massively improve initial v3 sync times by refactoring `JoinedRoomsTracker`: from ~47s to <1s.
 - Add `SlidingSyncUntil...` in tests to reduce races.
 - Tweak the API shape of `JoinedUsersForRoom` to reduce state block processing time for large rooms from 63s to 39s.
 - Add trace task for initial syncs.
 - Include the proxy version in UA strings.
 - HTTP errors now wait 1s before returning to stop clients tight-looping on error.
 - Pending event buffer is now 2000.
 - Index the room ID first to cull the most events when returning timeline entries. Speeds up `SelectLatestEventsBetween` by a factor of 8.
 - Remove cancelled `m.room_key_requests` from the to-device inbox. Cuts down the amount of events in the inbox by ~94% for very large (20k+) inboxes, ~50% for moderate sized (200 events) inboxes. Adds book-keeping to remember the unacked to-device position for each client.
2022-12-14 18:53:55 +00:00


package handler

// LazyCache tracks which users' membership events have already been sent to
// the client for each lazy-loaded room, so each member event is sent at most once.
type LazyCache struct {
	// cache is keyed by "roomID | userID" and records (room, user) pairs
	// whose member event has already been sent.
	cache map[string]struct{}
	// rooms records which rooms are being lazy loaded.
	rooms map[string]struct{}
}

// NewLazyCache creates an empty LazyCache.
func NewLazyCache() *LazyCache {
	return &LazyCache{
		cache: make(map[string]struct{}),
		rooms: make(map[string]struct{}),
	}
}

// IsSet returns true if the member event for this user in this room has
// already been sent.
func (lc *LazyCache) IsSet(roomID, userID string) bool {
	key := roomID + " | " + userID
	_, exists := lc.cache[key]
	return exists
}

// IsLazyLoading returns true if this room is being lazy loaded.
func (lc *LazyCache) IsLazyLoading(roomID string) bool {
	_, exists := lc.rooms[roomID]
	return exists
}

// Add marks each of the given users as seen in this room.
func (lc *LazyCache) Add(roomID string, userIDs ...string) {
	for _, u := range userIDs {
		lc.AddUser(roomID, u)
	}
}

// AddUser to this room. Returns true if this is the first time this user has done so, and
// hence you should include the member event for this user.
func (lc *LazyCache) AddUser(roomID, userID string) bool {
	lc.rooms[roomID] = struct{}{}
	key := roomID + " | " + userID
	_, exists := lc.cache[key]
	if exists {
		return false
	}
	lc.cache[key] = struct{}{}
	return true
}
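
A minimal, self-contained sketch of how a caller might consult this cache when deciding whether to emit a member event. The room and user IDs are made up, and the trimmed copy of the type is for illustration only; the real one lives in the handler package above:

```go
package main

import "fmt"

// Trimmed illustrative copy of LazyCache from the handler package.
type LazyCache struct {
	cache map[string]struct{} // keyed by "roomID | userID"
	rooms map[string]struct{} // rooms currently being lazy loaded
}

func NewLazyCache() *LazyCache {
	return &LazyCache{
		cache: make(map[string]struct{}),
		rooms: make(map[string]struct{}),
	}
}

// AddUser returns true the first time a user is added to a room, meaning
// the caller should include that user's member event in the response.
func (lc *LazyCache) AddUser(roomID, userID string) bool {
	lc.rooms[roomID] = struct{}{}
	key := roomID + " | " + userID
	if _, exists := lc.cache[key]; exists {
		return false
	}
	lc.cache[key] = struct{}{}
	return true
}

// IsLazyLoading reports whether any members have been lazily sent for this room.
func (lc *LazyCache) IsLazyLoading(roomID string) bool {
	_, exists := lc.rooms[roomID]
	return exists
}

func main() {
	lc := NewLazyCache()
	// First sighting of @alice in this room: send her member event.
	fmt.Println(lc.AddUser("!abc:example.org", "@alice:example.org")) // true
	// Seen before: skip the member event.
	fmt.Println(lc.AddUser("!abc:example.org", "@alice:example.org")) // false
	fmt.Println(lc.IsLazyLoading("!abc:example.org"))                 // true
}
```

Because the per-sender deduplication key is the `"roomID | userID"` pair, the same user joining a second room is treated as unseen there and their member event is sent again for that room.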