Features:
- Add `typing` extension.
- Add `receipts` extension.
- Add comprehensive prometheus `/metrics` activated via `SYNCV3_PROM`.
- Add `SYNCV3_PPROF` support.
- Add `by_notification_level` sort order.
- Add `include_old_rooms` support.
- Add support for `$ME` and `$LAZY`.
- Add correct filtering when `*,*` is used as `required_state`.
- Add `num_live` to each room response to indicate how many timeline entries are live.
Bug fixes:
- Use a stricter comparison function on ranges: fixes an issue whereby unit tests fail on Go 1.19 due to a change in the sorting algorithm.
- Send back an `errcode` on HTTP errors (e.g. expired sessions).
- Remove `unsigned.txn_id` on insertion into the DB. Otherwise users would see each other's txn IDs :(
- Improve range delta algorithm: previously it didn't handle cases like `[0,20] -> [20,30]` and would panic.
- Send HTTP 400 for invalid range requests.
- Don't publish no-op unread counts, which just add extra noise.
- Fix leaking DB connections which could eventually consume all available connections.
- Ensure we always unblock `WaitUntilInitialSync` even on invalid access tokens. Other code relies on `WaitUntilInitialSync()` actually returning at _some_ point; e.g. on startup we have N workers which bound the number of concurrent pollers made at any one time, so we must not hog a worker forever.
Improvements:
- Greatly improve startup times of sync3 handlers by improving `JoinedRoomsTracker`: with a modest amount of data, creating the handler took ~28s; it now takes 4s.
- Massively improve initial v3 sync times, by refactoring `JoinedRoomsTracker`, from ~47s to <1s.
- Add `SlidingSyncUntil...` in tests to reduce races.
- Tweak the API shape of `JoinedUsersForRoom` to reduce state block processing time for large rooms from 63s to 39s.
- Add trace task for initial syncs.
- Include the proxy version in UA strings.
- HTTP errors now wait 1s before returning to stop clients tight-looping on error.
- Pending event buffer is now 2000.
- Index the room ID first to cull the most events when returning timeline entries. Speeds up `SelectLatestEventsBetween` by a factor of 8.
- Remove cancelled `m.room_key_requests` from the to-device inbox (see the sketch after this list). Cuts down the number of events in the inbox by ~94% for very large (20k+) inboxes, and ~50% for moderate sized (200 events) inboxes. Adds book-keeping to remember the unacked to-device position for each client.
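The cancellation matching works roughly as follows. This is an illustrative Go sketch with a simplified event shape, not the proxy's actual code; in particular it drops every cancellation along with its matching request, whereas the real implementation must also keep cancellations whose request was already delivered.
```
// roomKeyRequest is a simplified view of an m.room_key_request to-device
// event's content; the proxy parses real events, this is for illustration.
type roomKeyRequest struct {
    Action             string // "request" or "request_cancellation"
    RequestID          string
    RequestingDeviceID string
}

// removeCancelled drops requests which have a matching cancellation, and
// (in this simplified sketch) the cancellations themselves.
func removeCancelled(events []roomKeyRequest) []roomKeyRequest {
    type key struct{ device, requestID string }
    cancelled := make(map[key]bool)
    for _, ev := range events {
        if ev.Action == "request_cancellation" {
            cancelled[key{ev.RequestingDeviceID, ev.RequestID}] = true
        }
    }
    var kept []roomKeyRequest
    for _, ev := range events {
        if cancelled[key{ev.RequestingDeviceID, ev.RequestID}] {
            continue // request + cancellation cancel each other out
        }
        kept = append(kept, ev)
    }
    return kept
}
```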
This is in preparation for allowing the proxy to walk over tombstones automatically. Currently, we just emulate `is_tombstoned` behaviour by checking whether `upgraded_room_id != NULL`.
This is a breaking database change as there is no migration path.
If a room moved from one window range to another window range, and the
index of the destination was the leading edge of a different window,
this would trip up the code into thinking it was a no-op move and hence
not issue a DELETE/INSERT for the 2nd window, even though it was in fact
needed. For example:
```
w1 w2
[0,1,2] 3,4,5 [6,7,8]
Move 1 to 6 turns the list into:
[0,2,3] 4,5,6 [1,7,8]
which should be the operations:
DELETE 1, INSERT 2 (val=3)
DELETE 6, INSERT 6 (val=1)
but because DELETE/INSERT both have the same index value, and the target
room is the updated room, we thought this was the same as when you have:
[0,1,2] 3,4,5
Move 0 to 0
which should no-op.
```
Fixed by ensuring that we also check that there is only 1 move operation.
If there is more than one move operation then we are moving between window
ranges and should include the DELETE/INSERT operation even though it has
the same index. Without this, the bug could manifest as updated rooms
spontaneously disappearing and/or neighbouring rooms being duplicated.
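Roughly, the check looks like this (illustrative Go with hypothetical types, not the proxy's actual op types):
```
// moveOp pairs the DELETE and INSERT indexes produced for one window.
// Hypothetical type for illustration only.
type moveOp struct {
    DeleteIndex int
    InsertIndex int
}

// keepOps decides which ops to emit: a DELETE/INSERT pair with the same
// index is only a genuine no-op when it is the *only* move produced by the
// update. If several windows produced moves, the room crossed a window
// boundary and the same-index pair must still be sent.
func keepOps(ops []moveOp) []moveOp {
    if len(ops) == 1 && ops[0].DeleteIndex == ops[0].InsertIndex {
        return nil // true no-op: the room moved onto itself within one window
    }
    return ops
}
```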
Previously, we would just focus on finding _any_ window boundary and then
assume that was the boundary which matched the window for the purposes of
DELETE/INSERT move operations. However, this wasn't always true, especially
in the following case:
```
0..9 [10..20] 21...29 [30...40]
then move 30 to 10
0..9 [30,10...19] 20...28 [29,31...40]
expect:
- DELETE 30, INSERT 30 (val=29)
- DELETE 20, INSERT 10 (val=30)
but we would get:
- DELETE 30, INSERT 20 (val=19)
- DELETE 20, INSERT 10 (val=30)
because the code assumed that there was a window range [20,30] which there wasn't.
```
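The fix is to resolve the window which actually contains the index, rather than stopping at the first boundary found. A minimal sketch, with a hypothetical `window` type:
```
// window is an inclusive [Start, End] index range, matching the shape of
// sliding sync request ranges. Hypothetical helper for illustration only.
type window struct {
    Start, End int
}

// windowContaining returns the window whose range actually contains idx,
// rather than the first boundary encountered; ok is false when idx falls
// between windows.
func windowContaining(windows []window, idx int) (w window, ok bool) {
    for _, candidate := range windows {
        if idx >= candidate.Start && idx <= candidate.End {
            return candidate, true
        }
    }
    return window{}, false
}
```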
- Randomly move elements 10,000 times in a sliding window.
- Fixed a bug found as a result, which caused the algorithm to
  fail to issue a DELETE/INSERT when the room was _inserted_
  at the very end of the window range, due to it misfiring
  with the logic that suppresses operations for no-op moves.
Previously we wouldn't send deletions for this, even though they shift
all elements to the left. Add a battery of unit tests for the list delta
algorithm, and standardise on the practice of issuing a DELETE prior to
an INSERT for newly inserted rooms, regardless of where in the window
they appear. Previously, we might skip the DELETE at the end of the list,
which was just inconsistent of us.
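The randomised check mentioned above (moving elements 10,000 times) is conceptually like the following test. It uses a simplified single-window model and a reference delta calculation, not the proxy's actual algorithm, but it shows the shape of the property being checked: apply the emitted DELETE/INSERT (in that order) to the client's copy of the window and assert it matches the server's view.
```
package syncv3_test

import (
    "math/rand"
    "testing"
)

// op is a simplified DELETE/INSERT pair with absolute list indexes.
type op struct {
    deleteIdx, insertIdx, insertVal int
}

// diffWindow computes the single DELETE/INSERT needed to turn the old
// window contents into the new ones after one move, for a window starting
// at absolute index start. It is a reference calculation for this test,
// not the proxy's algorithm, and relies on the values being unique.
func diffWindow(before, after []int, start int) *op {
    f, l := 0, len(before)-1
    for f <= l && before[f] == after[f] {
        f++
    }
    if f > l {
        return nil // the move did not touch this window
    }
    for before[l] == after[l] {
        l--
    }
    if f == l || after[f] == before[f+1] {
        // elements shifted left: the incoming value appears at the right edge.
        return &op{deleteIdx: start + f, insertIdx: start + l, insertVal: after[l]}
    }
    // elements shifted right: the incoming value appears at the left edge.
    return &op{deleteIdx: start + l, insertIdx: start + f, insertVal: after[f]}
}

func TestRandomMovesInWindow(t *testing.T) {
    const n, iterations = 50, 10000
    list := make([]int, n)
    for i := range list {
        list[i] = i // unique "room IDs"
    }
    start, end := 10, 29 // arbitrary window [10,29]
    client := append([]int(nil), list[start:end+1]...)
    for i := 0; i < iterations; i++ {
        from, to := rand.Intn(n), rand.Intn(n)
        // apply the move server-side: remove at `from`, insert at `to`.
        moved := list[from]
        next := append([]int(nil), list[:from]...)
        next = append(next, list[from+1:]...)
        next = append(next[:to], append([]int{moved}, next[to:]...)...)
        // DELETE first, then INSERT, mirroring the standardised op order.
        if o := diffWindow(list[start:end+1], next[start:end+1], start); o != nil {
            di, ii := o.deleteIdx-start, o.insertIdx-start
            client = append(client[:di], client[di+1:]...)
            client = append(client[:ii], append([]int{o.insertVal}, client[ii:]...)...)
        }
        for j := range client {
            if client[j] != next[start+j] {
                t.Fatalf("iteration %d: window diverged at index %d", i, start+j)
            }
        }
        list = next
    }
}
```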
This is so clients can accurately calculate the push rule:
```
{"kind":"room_member_count","is":"2"}
```
Also fixed a bug in the global room metadata where the joined/invited
counts could be wrong because Synapse sends duplicate join events and we
were tracking ±1 deltas. We now calculate these counts based on the set
of user IDs in a specific membership state.
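Conceptually, the counting now works like this (illustrative sketch; the proxy's metadata structs differ):
```
// memberships tracks, per room, the set of user IDs in each membership
// state. Hypothetical shape for illustration only.
type memberships struct {
    joined  map[string]struct{}
    invited map[string]struct{}
}

// onMembershipEvent updates the sets idempotently: a duplicate join for
// the same user ID no longer inflates the count, because adding an
// existing member to the set is a no-op.
func (m *memberships) onMembershipEvent(userID, membership string) {
    delete(m.joined, userID)
    delete(m.invited, userID)
    switch membership {
    case "join":
        m.joined[userID] = struct{}{}
    case "invite":
        m.invited[userID] = struct{}{}
    }
}

// counts derives the values used for the room_member_count push rule condition.
func (m *memberships) counts() (joined, invited int) {
    return len(m.joined), len(m.invited)
}
```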
Then just loop over the list deltas when processing the event. This
ensures we don't needlessly loop over lists which did not care and
still do not care about the incoming update.
This is part of a series of refactors aimed to improve the performance
and complexity of calculating list deltas, which up until now exists in
its current form due to organic growth of the codebase.
This specific refactor introduces a new interface `RoomFinder` which
can map room IDs to `*RoomConnMetadata` which is used by `ConnState`.
All the sliding sync lists now use the `RoomFinder` instead of keeping
their own copies of `RoomConnMetadata`, meaning per-connection, rooms
just have 1 copy in-memory. This cuts down on memory usage as well as
GC churn, as previously we would constantly replace N rooms on each
update, where N is the total number of lists on that connection.
For Element-Web, N=7 currently to handle Favourites, Low Priority, DMs,
Rooms, Spaces, Invites, Search. This also has the benefit of creating
a single source of truth in `InternalRequestLists.allRooms` which can
be updated once and then a list of list deltas can be calculated off
the back of that. Previously, `allRooms` was _only_ used to seed new
lists, which created a weird imbalance as we would need to update both
`allRooms` _and_ each `FilteredSortableRooms` to keep things in-sync.
This refactor is incomplete in its present form, as we need to make
use of the new `RoomDelta` struct to efficiently package list updates.
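For illustration, the shape of the indirection is roughly as follows; the exact method set and fields in the proxy differ:
```
// RoomConnMetadata's real fields live in the proxy; this sketch only shows
// the idea of a single shared copy resolved through the finder.
type RoomConnMetadata struct {
    RoomID            string
    CanonicalisedName string
    // ... other per-connection room state
}

// RoomFinder maps a room ID to the single in-memory copy of its metadata.
type RoomFinder interface {
    Room(roomID string) *RoomConnMetadata
}

// filteredSortableRooms (illustrative shape, not the proxy's struct) now
// only needs to hold room IDs; sorting and filtering resolve metadata via
// the finder instead of each list keeping its own duplicate copy.
type filteredSortableRooms struct {
    finder  RoomFinder
    roomIDs []string
}
```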
This could happen with 1-length windows, e.g. `[0,0]`, where an element
was moved from outside the range (e.g. i=5) to the window index (e.g. 0).
This then triggered an off-by-one error in the code which snapped
indexes to windows. Fixed with regression tests.
Caused by us not updating the `CanonicalisedName`, which is what we sort on.
This field is a bit of an oddity: it lived outside the user/global cache
fields because it is a value calculated from global cache data in the scope
of a single user, whereas other user cache values are derived directly from
specific data (notif counts, DM-ness). This is a silly distinction however,
since spaces are derived from global room data as well, so move
`CanonicalisedName` to the `UserCache` and keep it updated when the room name changes.
Longer term: we need to clean this up so only the user cache is responsible
for updating user cache fields, and connstate treats user room data and global
room data as immutable. This is _mostly_ true today, but isn't always, and it
causes headaches. In addition, it looks like we maintain O(n) caches based on
the number of lists the user has made: we needn't do this and should lean
much more heavily on `s.allRooms`, just keeping pointers to this slice from
whatever lists the user requests.
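As a rough sketch of where the field now lives and how it is refreshed (illustrative names and a hypothetical hook, not the proxy's actual structs):
```
// userRoomData sketches the per-user, per-room cache entry; the proxy's
// real UserCache fields differ. CanonicalisedName lives here because it is
// calculated from global room data in the scope of a single user.
type userRoomData struct {
    NotificationCount int
    IsDM              bool
    CanonicalisedName string // pre-processed room name used for sorting
}

// onRoomNameChange is a hypothetical hook: when the global cache observes a
// room name change it pushes the freshly canonicalised name into the user
// cache, so subsequent sorts use up-to-date data.
func onRoomNameChange(cache map[string]*userRoomData, roomID, canonicalisedName string) {
    if urd, ok := cache[roomID]; ok {
        urd.CanonicalisedName = canonicalisedName
    }
}
```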
We weren't doing this previously, but things didn't blow up because
we would almost always call resort() shortly afterwards which _would_
update the map with the new sort positions. In some cases this could
cause the lists to be sorted with incorrect index positions, notably:
- when an invite is retired.
- when a room no longer meets filter criteria and is removed.
This could be a major source of duplicate rooms.
Previously we used a slice as this is slightly cheaper, but
since Synapse can return multiple join events in the timeline
it could cause a user ID to be present multiple times in a room.
When this happened, it would cause the `UserCache` callbacks to
be invoked twice for every event, clearly not ideal. By using a
set instead, we make sure that we don't add the same user more than once.
Ref: https://github.com/matrix-org/synapse/issues/9768
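Conceptually the change is (illustrative sketch only; the proxy's tracker stores more than this):
```
// joinedUsers tracks membership as a set so duplicate join events from
// upstream cannot add the same user twice.
type joinedUsers map[string]struct{}

// add returns false for duplicates so callers can avoid firing
// UserCache callbacks twice for the same join.
func (j joinedUsers) add(userID string) bool {
    if _, exists := j[userID]; exists {
        return false
    }
    j[userID] = struct{}{}
    return true
}
```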
With regression tests. Comments explain the edge case, but basically
previously we were not calling `builder.AddRoomsToSubscription` with
the new room because we didn't know it was brand new, as it had a
valid swap operation. It only had a valid swap op because we "inserted"
(read: pretended) that the room had always been there at `len(list)`,
so the from index was outside the known range. This works great most
of the time, but failed in the case where you use a large window size,
e.g. `[[0,20]]` for 3 rooms: the 4th room is then still "inside the range"
and hence is merely an update, not a brand new room, so we wouldn't add
the room to the builder.
Fixed by decoupling adding rooms to the builder and expecting swap/insert
ops; they aren't mutually exclusive.
Relevant actions include:
- People joining/leaving a room
- An `m.room.name` or `m.room.canonical_alias` event is sent
- etc.
Prior to this, we only set the room name field for `initial=true`
rooms. This meant that if a room name was updated whilst it was
in the visible range (or currently subscribed to), we wouldn't set
this field, resulting in stale names for clients. This was particularly
prominent when you created a room, as the initial member event would
cause the room to appear in the list as "Empty room" which then would
never be updated even if there was a subsequent `m.room.name` event
sent.
Fixed with regression tests.
This was caused by the GlobalCache not having a metadata entry for
the new room, which in some cases prevented a stub from being made.
With regression test.
The problem is that there is NOT a 1:1 relationship between request/response,
due to cancellations needing to be processed (else state diverges between client/server).
Whilst we were buffering responses and returning them eagerly if the request data did
not change, we were processing new requests if the request data DID change. This puts us
in an awkward position. We have >1 response waiting to send to the client, but we
cannot just _ignore_ their new request else we'll just drop it to the floor, so we're
forced to process it and _then_ return the buffered response. This is great so long as
the request processing doesn't take long: which it will if we are waiting for live updates.
To get around this, when we detect this scenario, we artificially reduce the timeout value
to ensure request processing is fast.
If we just use websockets this problem goes away...
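Conceptually, the timeout reduction looks like this (illustrative sketch; the names and the 100ms clamp value are not the proxy's actual constants):
```
// effectiveTimeout clamps the long-poll timeout when responses are already
// buffered for this connection but the incoming request still has to be
// processed because its parameters changed.
func effectiveTimeout(requestedTimeoutMS, bufferedResponses int, requestChanged bool) int {
    if bufferedResponses > 0 && requestChanged {
        return 100 // process quickly, then flush the buffered response
    }
    return requestedTimeoutMS
}
```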