The RoomFinder accesses s.allRooms and is used when sorting the
room list, where we would expect many accesses. Previously, we
returned copies of room metadata, which caused significant amounts
of GC churn, enough to show up on traces.
Switch to returning pointers and rename the function to `ReadOnlyRoom(roomID)`
to indicate that it is not safe to write to the returned value.
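A minimal sketch of the new shape, assuming `allRooms` is a map keyed by room ID (the fields on `RoomConnMetadata` are illustrative):

```go
package sync3

// RoomConnMetadata is a stub for illustration; the real struct holds
// per-connection room state used for sorting (name, recency, etc.).
type RoomConnMetadata struct {
	RoomID string
	Name   string
}

type InternalRequestLists struct {
	allRooms map[string]*RoomConnMetadata
}

// ReadOnlyRoom returns the shared pointer rather than a copy. The name
// signals that callers must not write to the result: it is aliased by
// every list on the connection. Skipping the copy avoids one struct
// allocation per lookup, which matters when sorting touches every room.
func (s *InternalRequestLists) ReadOnlyRoom(roomID string) *RoomConnMetadata {
	return s.allRooms[roomID]
}
```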
Features:
- Add `typing` extension.
- Add `receipts` extension.
- Add comprehensive Prometheus `/metrics`, activated via `SYNCV3_PROM`.
- Add `SYNCV3_PPROF` support.
- Add `by_notification_level` sort order.
- Add `include_old_rooms` support.
- Add support for `$ME` and `$LAZY`.
- Add correct filtering when `*,*` is used as `required_state`.
- Add `num_live` to each room response to indicate how many timeline entries are live.
Bug fixes:
- Use a stricter comparison function on ranges: fixes an issue whereby unit tests fail on go1.19 due to a change in its sorting algorithm.
- Send back an `errcode` on HTTP errors (e.g. expired sessions).
- Remove `unsigned.txn_id` on insertion into the DB. Otherwise, users would see other users' txn IDs :(
- Improve range delta algorithm: previously it didn't handle cases like `[0,20] -> [20,30]` and would panic.
- Send HTTP 400 for invalid range requests.
- Don't publish no-op unread counts, which just add extra noise.
- Fix leaking DB connections which could eventually consume all available connections.
- Ensure we always unblock WaitUntilInitialSync, even on invalid access tokens. Other code relies on `WaitUntilInitialSync()` actually returning at _some_ point: e.g. on startup we have N workers which bound the number of concurrent pollers made at any one time, so we must not hog a worker forever (see the sketch after this list).
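To illustrate the worker-bounding pattern that last fix protects, here is a hedged sketch (the function names and semaphore shape are assumptions, not the proxy's actual code):

```go
package poller

// startPollers bounds concurrency with a semaphore channel of n slots.
// Each poller holds a slot until waitUntilInitialSync returns; if that
// call never returned for an invalid token, the slot would leak and
// eventually all n workers would be hogged forever.
func startPollers(tokens []string, n int) {
	sem := make(chan struct{}, n)
	for _, tok := range tokens {
		sem <- struct{}{} // acquire a slot; blocks when all n are busy
		go func(token string) {
			defer func() { <-sem }() // always release, even on bad tokens
			waitUntilInitialSync(token)
		}(tok)
	}
}

// waitUntilInitialSync is an illustrative stand-in: the fix ensures it
// returns even when the access token is invalid.
func waitUntilInitialSync(token string) {}
```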
Improvements:
- Greatly improve startup times of sync3 handlers by improving `JoinedRoomsTracker`: previously a modest amount of data took ~28s to create the handler; now it takes 4s.
- Massively improve initial v3 sync times by refactoring `JoinedRoomsTracker`: from ~47s to <1s.
- Add `SlidingSyncUntil...` in tests to reduce races.
- Tweak the API shape of JoinedUsersForRoom to reduce state block processing time for large rooms from 63s to 39s.
- Add trace task for initial syncs.
- Include the proxy version in UA strings.
- HTTP errors now wait 1s before returning to stop clients tight-looping on error.
- Increase the pending event buffer size to 2000.
- Index the room ID first to cull the most events when returning timeline entries. Speeds up `SelectLatestEventsBetween` by a factor of 8.
- Remove cancelled `m.room_key_requests` from the to-device inbox (sketched below). This cuts the number of events in the inbox by ~94% for very large (20k+) inboxes and ~50% for moderately sized (200-event) inboxes, and adds book-keeping to remember the unacked to-device position for each client.
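A hedged sketch of the cancellation pruning described in the last item. The event shape follows the spec'd `m.room_key_request` content (an `action` of `request` or `request_cancellation`, paired by `request_id`); the function name is an assumption:

```go
package todevice

import "encoding/json"

// Event is a minimal to-device event for illustration.
type Event struct {
	Type    string          `json:"type"`
	Content json.RawMessage `json:"content"`
}

type keyRequestContent struct {
	Action    string `json:"action"`     // "request" or "request_cancellation"
	RequestID string `json:"request_id"` // pairs a cancellation with its request
}

// removeCancelledKeyRequests drops every m.room_key_request whose
// request_id was later cancelled, along with the cancellation itself,
// since delivering either to the client is a no-op.
func removeCancelledKeyRequests(events []Event) []Event {
	cancelled := make(map[string]bool)
	for _, ev := range events {
		if ev.Type != "m.room_key_request" {
			continue
		}
		var c keyRequestContent
		if json.Unmarshal(ev.Content, &c) == nil && c.Action == "request_cancellation" {
			cancelled[c.RequestID] = true
		}
	}
	out := make([]Event, 0, len(events))
	for _, ev := range events {
		if ev.Type == "m.room_key_request" {
			var c keyRequestContent
			if json.Unmarshal(ev.Content, &c) == nil && cancelled[c.RequestID] {
				continue // the request was cancelled (or is the cancellation)
			}
		}
		out = append(out, ev)
	}
	return out
}
```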
Previously we wouldn't send deletions for this, even though they shift
all elements to the left. Add a battery of unit tests for the list delta
algorithm, and standardise on the practice of issuing a DELETE prior to
an INSERT for newly inserted rooms, regardless of where in the window
they appear. Previously, we might skip the DELETE at the end of the
list, which was just inconsistent of us.
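A hedged sketch of the DELETE-before-INSERT convention (the op names mirror the sliding sync list operations; the Go types here are illustrative):

```go
package delta

// ListOp is an illustrative list operation: DELETE frees an index,
// INSERT places a room at one, and everything in between shifts.
type ListOp struct {
	Op     string // "DELETE" or "INSERT"
	Index  int
	RoomID string // set for INSERT only
}

// insertRoomOps always issues the DELETE before the INSERT, even when
// the deleted index is the last one in the window. Inserting at index
// i shifts i..end right by one, so the room at the final index always
// leaves the window and the client must be told.
func insertRoomOps(insertIndex, lastIndex int, roomID string) []ListOp {
	return []ListOp{
		{Op: "DELETE", Index: lastIndex}, // sent even at the end of the list
		{Op: "INSERT", Index: insertIndex, RoomID: roomID},
	}
}
```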
This is part of a series of refactors aimed at improving the performance
and complexity of calculating list deltas, which, up until now, existed in
its current form due to organic growth of the codebase.
This specific refactor introduces a new interface `RoomFinder` which
can map room IDs to `*RoomConnMetadata` which is used by `ConnState`.
All the sliding sync lists now use the `RoomFinder` instead of keeping
their own copies of `RoomConnMetadata`, meaning that per connection,
each room has just 1 copy in memory. This cuts down on memory usage as
well as GC churn, as previously we would constantly replace N rooms for
each update, where N is the total number of lists on that connection.
For Element Web, N=7 currently, to handle Favourites, Low Priority,
DMs, Rooms, Spaces, Invites and Search. This also has the benefit of creating
a single source of truth in `InternalRequestLists.allRooms` which can
be updated once and then a list of list deltas can be calculated off
the back of that. Previously, `allRooms` was _only_ used to seed new
lists, which created a weird imbalance as we would need to update both
`allRooms` _and_ each `FilteredSortableRooms` to keep things in-sync.
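A minimal sketch of this arrangement, assuming a name-based sort (the `RoomFinder` and `FilteredSortableRooms` names come from the text above; their fields and methods here are illustrative):

```go
package sync3

// RoomConnMetadata is a stub; the real struct holds per-connection
// room state used for sorting.
type RoomConnMetadata struct {
	Name string
}

// RoomFinder maps a room ID to the single in-memory copy of its
// metadata for this connection.
type RoomFinder interface {
	ReadOnlyRoom(roomID string) *RoomConnMetadata
}

// FilteredSortableRooms keeps only room IDs plus a finder, rather than
// its own copy of each room's metadata (one copy per list, previously).
type FilteredSortableRooms struct {
	finder  RoomFinder
	roomIDs []string
}

// Less sorts via the shared metadata: an update applied once to
// allRooms is immediately visible to every list on the connection.
func (f *FilteredSortableRooms) Less(i, j int) bool {
	return f.finder.ReadOnlyRoom(f.roomIDs[i]).Name <
		f.finder.ReadOnlyRoom(f.roomIDs[j]).Name
}
```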
This refactor is incomplete in its present form, as we need to make
use of the new `RoomDelta` struct to efficiently package list updates.
We weren't doing this previously, but things didn't blow up because
we would almost always call resort() shortly afterwards which _would_
update the map with the new sort positions. In some cases this could
cause the lists to be sorted with incorrect index positions, notably:
- when an invite is retired.
- when a room no longer meets filter criteria and is removed.
This could be a major source of duplicate rooms.
Relevant actions include:
- People joining/leaving a room
- An m.room.name or m.room.canonical_alias event is sent
- etc.
Prior to this, we just set the room name field for initial=true
rooms only. This meant that if a room name was updated whilst it was
in the visible range (or currently subscribed to), we wouldn't set
this field, resulting in stale names for clients. This was particularly
prominent when you created a room, as the initial member event would
cause the room to appear in the list as "Empty room" which then would
never be updated even if there was a subsequent `m.room.name` event
sent.
Fixed with regression tests.
This was caused by the GlobalCache not having a metadata entry for
the new room, which in some cases prevented a stub from being made.
With regression test.
The caches pkg is required to avoid import loops: sync3 depends on
extensions because the extension type is embedded in `sync3.Request`,
but extensions needs to know about the caches for account data to
function correctly.
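A sketch of the resulting dependency direction; the import path and type names below are hypothetical, and only the acyclic shape (sync3 -> extensions -> caches) is the point:

```go
// Package extensions must not import sync3, because sync3 embeds
// extensions.Request; the account data caches therefore live in a
// separate caches package which both sides may import.
package extensions

import "example.com/proxy/caches" // hypothetical module path

// Request is the extensions block embedded in sync3.Request.
type Request struct {
	AccountData *AccountDataArgs `json:"account_data,omitempty"`
}

type AccountDataArgs struct {
	Enabled bool `json:"enabled"`
}

// ProcessAccountData reads from the shared cache without touching
// sync3, keeping the import graph acyclic.
func ProcessAccountData(req *Request, cache *caches.UserCache) {
	if req.AccountData == nil || !req.AccountData.Enabled {
		return
	}
	_ = cache // fetch the user's account data from the cache here
}
```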
Particularly as the server expands into multiple lists and
filters, having a way to quickly detect off-by-one index
errors is important, so add an assert() function which
will panic() if SYNCV3_DEBUG=1, and otherwise log an angry message.
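A minimal sketch of that behaviour (the exact signature is an assumption):

```go
package internal

import (
	"log"
	"os"
)

// assert panics when SYNCV3_DEBUG=1 so development runs and tests fail
// fast on off-by-one index errors; otherwise it logs loudly and
// carries on, since crashing production over a bad index is worse.
func assert(msg string, expr bool) {
	if expr {
		return
	}
	if os.Getenv("SYNCV3_DEBUG") == "1" {
		panic("assertion failed: " + msg)
	}
	log.Printf("ERROR: assertion failed: %s", msg)
}
```

Call sites then read like `assert("room index within window", i < len(rooms))`.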
`sync3` contains data structures and logic which is very isolated and
testable (think ConnMap, Room, Request, SortableRooms, etc) whereas
`sync3/handler` contains control flow which calls into `sync3` data
structures.
This has numerous benefits:
- Gnarly complicated structs like `ConnState` are now more isolated
from the codebase, forcing better API design on `sync3` structs.
- The inability to do import cycles forces structs in `sync3` to remain
simple: they cannot pull in control flow logic from `sync3/handler`
without causing a compile error.
- For new developers, it's significantly easier to figure out where to
start looking for the code that executes when a new request is received.
- It reduces the number of things that `ConnState` can touch. Previously
we were gut-wrenching struct internals out of convenience, but now we're
forced to move more logic from `ConnState` into `sync3` (depending on
the API design): for example, adding `SortableRooms.RoomIDs()`.
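For instance, a narrow accessor like `RoomIDs()` might look like this (a sketch; the real method may differ):

```go
package sync3

// SortableRooms is a stub for illustration.
type SortableRooms struct {
	roomIDs []string
}

// RoomIDs returns a copy of the ordered room IDs. Handlers get the
// data they need without reaching into SortableRooms' internals, so
// the sorting invariants stay encapsulated inside the sync3 package.
func (s *SortableRooms) RoomIDs() []string {
	out := make([]string, len(s.roomIDs))
	copy(out, s.roomIDs)
	return out
}
```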