The server will wait 1s if clients:
- repeat the same request (same `?pos=`)
- repeatedly hit `/sync` without a `?pos=`.
Both of these failure modes have been seen in the wild.
Fixes #93.
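Roughly, the intent is something like the following sketch; the names and the keying are hypothetical, not the proxy's actual code:

```go
package sketch

import (
	"net/http"
	"sync"
	"time"
)

// debouncer tracks the last ?pos= seen per connection key.
type debouncer struct {
	mu      sync.Mutex
	lastPos map[string]string
}

func newDebouncer() *debouncer {
	return &debouncer{lastPos: make(map[string]string)}
}

// maybeWait sleeps for 1s if the client repeats the same ?pos= (or keeps
// omitting it), slowing down tight-looping clients.
func (d *debouncer) maybeWait(connKey, pos string) {
	d.mu.Lock()
	last, seen := d.lastPos[connKey]
	d.lastPos[connKey] = pos
	d.mu.Unlock()
	if seen && last == pos {
		time.Sleep(time.Second)
	}
}

func (d *debouncer) handleSync(w http.ResponseWriter, r *http.Request) {
	// Keying off the Authorization header here is purely illustrative.
	d.maybeWait(r.Header.Get("Authorization"), r.URL.Query().Get("pos"))
	// ... process the sync request as normal ...
}
```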
The user cache listeners slice is written to by HTTP goroutines
when clients make requests, and is read by the callbacks from v2
pollers. This slice wasn't protected against bad reads; only writes
were protected. The mutex is now an RWMutex to handle this.
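A minimal sketch of the pattern, assuming a simplified listener signature (the real `UserCache` API differs):

```go
package sketch

import "sync"

// UserCache is a hypothetical stand-in for the real user cache type.
type UserCache struct {
	mu        sync.RWMutex
	listeners []func(update string)
}

// Subscribe is called from HTTP goroutines: it mutates the slice, so it
// takes the write lock.
func (c *UserCache) Subscribe(l func(update string)) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.listeners = append(c.listeners, l)
}

// onV2Update is called from v2 poller callbacks: it only reads the slice,
// so many callbacks can proceed concurrently under the read lock.
func (c *UserCache) onV2Update(update string) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	for _, l := range c.listeners {
		l(update)
	}
}
```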
cancelOutstandingReq is the context cancellation function to terminate
previous requests when a new request arrives. Whilst the request itself
is guarded by a mutex, invoking this cancellation function was not guarded
by anything. Added an extra mutex for this.
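A sketch of the idea, with hypothetical struct and field names:

```go
package sketch

import (
	"context"
	"sync"
)

// Conn is a hypothetical stand-in; only the cancellation plumbing is shown.
type Conn struct {
	cancelMu             sync.Mutex
	cancelOutstandingReq context.CancelFunc
}

// startRequest cancels any in-flight request and records the new cancel
// function, all under its own mutex so concurrent requests cannot race.
func (c *Conn) startRequest(ctx context.Context) context.Context {
	ctx, cancel := context.WithCancel(ctx)
	c.cancelMu.Lock()
	defer c.cancelMu.Unlock()
	if c.cancelOutstandingReq != nil {
		c.cancelOutstandingReq() // terminate the previous request
	}
	c.cancelOutstandingReq = cancel
	return ctx
}
```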
- `Conn`s now expose a direct `OnUpdate(caches.Update)` function
for updates which concern a specific device ID.
- Add a bitset in `DeviceData` to indicate if the OTK or fallback keys were changed.
- Pass through the affected `DeviceID` in `pubsub.V2DeviceData` updates.
- Remove `DeviceDataTable.SelectFrom` as it was unused.
- Refactor how the poller invokes `OnE2EEData`: it now only does this if
there are changes to OTK counts and/or fallback key types and/or device lists,
and _only_ sends those fields, setting the rest to the zero value.
- Remove noisy logging.
- Add `caches.DeviceDataUpdate` which has no data but serves to wake up the long poller.
- Only send OTK counts / fallback key types when they have changed, not constantly. This
  matches the behaviour described in MSC3884.
The entire flow now looks like:
- Poller notices a diff against the in-memory version of the OTK count and invokes `OnE2EEData`.
- Handler updates the device data table and bumps the changed bit for the OTK count.
- The other handler gets the pubsub update and directly finds the `Conn` based on the `DeviceID`,
  then invokes `OnUpdate(caches.DeviceDataUpdate)`.
- This update is handled by the E2EE extension which then pulls the data out from the database
and returns it.
- On initial connections, all OTK / fallback data is returned.
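For illustration, the changed-bits idea looks roughly like this; the field and constant names are hypothetical, not the real `DeviceData` schema:

```go
package sketch

// Changed-bit flags on DeviceData (hypothetical names).
const (
	ChangedOTKCounts        = 1 << iota // OTK counts changed since last read
	ChangedFallbackKeyTypes             // fallback key types changed since last read
)

// DeviceData is a simplified stand-in for the real struct.
type DeviceData struct {
	OTKCounts        map[string]int
	FallbackKeyTypes []string
	ChangedBits      int
}

func (dd *DeviceData) SetOTKCountChanged()       { dd.ChangedBits |= ChangedOTKCounts }
func (dd *DeviceData) OTKCountChanged() bool     { return dd.ChangedBits&ChangedOTKCounts != 0 }
func (dd *DeviceData) SetFallbackKeysChanged()   { dd.ChangedBits |= ChangedFallbackKeyTypes }
func (dd *DeviceData) FallbackKeysChanged() bool { return dd.ChangedBits&ChangedFallbackKeyTypes != 0 }
```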
Features:
- Add `typing` extension.
- Add `receipts` extension.
- Add comprehensive prometheus `/metrics` activated via `SYNCV3_PROM`.
- Add `SYNCV3_PPROF` support.
- Add `by_notification_level` sort order.
- Add `include_old_rooms` support.
- Add support for `$ME` and `$LAZY`.
- Add correct filtering when `*,*` is used as `required_state`.
- Add `num_live` to each room response to indicate how many timeline entries are live.
Bug fixes:
- Use a stricter comparison function on ranges: fixes an issue whereby UTs fail on go1.19 due to a change in the sorting algorithm.
- Send back an `errcode` on HTTP errors (e.g expired sessions).
- Remove `unsigned.txn_id` on insertion into the DB. Otherwise users would see other users' txn IDs :(
- Improve range delta algorithm: previously it didn't handle cases like `[0,20] -> [20,30]` and would panic (see the sketch after this list).
- Send HTTP 400 for invalid range requests.
- Don't publish no-op unread counts which just add extra noise.
- Fix leaking DB connections which could eventually consume all available connections.
- Ensure we always unblock WaitUntilInitialSync, even on invalid access tokens. Other code relies on WaitUntilInitialSync() actually returning at _some_ point: e.g. on startup we have N workers which bound the number of concurrent pollers made at any one time, so we must not hog a worker forever.
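A naive sketch of the range delta computation mentioned above (not the proxy's actual algorithm): expand both range lists into index sets and diff them.

```go
package sketch

// indexSet expands a list of inclusive [start,end] ranges into a set of indexes.
func indexSet(ranges [][2]int64) map[int64]struct{} {
	set := make(map[int64]struct{})
	for _, r := range ranges {
		for i := r[0]; i <= r[1]; i++ {
			set[i] = struct{}{}
		}
	}
	return set
}

// Delta returns the indexes only present in next (added) and only in prev (removed).
// E.g. Delta([][2]int64{{0, 20}}, [][2]int64{{20, 30}}) adds 21..30 and removes 0..19.
func Delta(prev, next [][2]int64) (added, removed []int64) {
	prevSet, nextSet := indexSet(prev), indexSet(next)
	for i := range nextSet {
		if _, ok := prevSet[i]; !ok {
			added = append(added, i)
		}
	}
	for i := range prevSet {
		if _, ok := nextSet[i]; !ok {
			removed = append(removed, i)
		}
	}
	return added, removed
}
```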
Improvements:
- Greatly improve startup times of sync3 handlers by improving `JoinedRoomsTracker`: a modest amount of data would take ~28s to create the handler, now it takes 4s.
- Massively improve initial v3 sync times, by refactoring `JoinedRoomsTracker`, from ~47s to <1s.
- Add `SlidingSyncUntil...` in tests to reduce races.
- Tweak the API shape of JoinedUsersForRoom to reduce state block processing time for large rooms from 63s to 39s.
- Add trace task for initial syncs.
- Include the proxy version in UA strings.
- HTTP errors now wait 1s before returning to stop clients tight-looping on error.
- Pending event buffer is now 2000.
- Index the room ID first to cull the most events when returning timeline entries. Speeds up `SelectLatestEventsBetween` by a factor of 8.
- Remove cancelled `m.room_key_requests` from the to-device inbox. Cuts down the amount of events in the inbox by ~94% for very large (20k+) inboxes, ~50% for moderate sized (200 events) inboxes. Adds book-keeping to remember the unacked to-device position for each client.
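A minimal sketch of the cancellation pruning mentioned in the last item, using a simplified event shape (not the proxy's exact code):

```go
package sketch

// toDeviceEvent is a simplified shape of a to-device event.
type toDeviceEvent struct {
	Type    string
	Content struct {
		Action             string // "request" or "request_cancellation"
		RequestingDeviceID string
		RequestID          string
	}
}

// removeCancelledKeyRequests drops key requests which were later cancelled,
// along with the cancellation itself, before they reach the client.
func removeCancelledKeyRequests(events []toDeviceEvent) []toDeviceEvent {
	cancelled := make(map[string]bool) // requesting device ID + request ID
	for _, ev := range events {
		if ev.Type == "m.room_key_request" && ev.Content.Action == "request_cancellation" {
			cancelled[ev.Content.RequestingDeviceID+"|"+ev.Content.RequestID] = true
		}
	}
	var kept []toDeviceEvent
	for _, ev := range events {
		if ev.Type == "m.room_key_request" &&
			cancelled[ev.Content.RequestingDeviceID+"|"+ev.Content.RequestID] {
			continue // drop both the request and its cancellation
		}
		kept = append(kept, ev)
	}
	return kept
}
```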
The problem is that there is NOT a 1:1 relationship between request/response,
due to cancellations needing to be processed (else state diverges between client/server).
Whilst we were buffering responses and returning them eagerly if the request data did
not change, we were processing new requests if the request data DID change. This puts us
in an awkward position. We have >1 response waiting to send to the client, but we
cannot just _ignore_ their new request else we'll just drop it to the floor, so we're
forced to process it and _then_ return the buffered response. This is great so long as
the request processing doesn't take long: which it will if we are waiting for live updates.
To get around this, when we detect this scenario, we artificially reduce the timeout value
to ensure request processing is fast.
If we just use websockets this problem goes away...
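A minimal sketch of the timeout reduction described above; the function name and the 100ms value are hypothetical:

```go
package sketch

import "time"

// effectiveTimeout shrinks the long-poll timeout when responses are already
// buffered, so the new request is processed quickly and the buffered response
// can be flushed, instead of blocking on live updates.
func effectiveTimeout(requested time.Duration, bufferedResponses int) time.Duration {
	if bufferedResponses > 0 {
		return 100 * time.Millisecond // hypothetical small value
	}
	return requested
}
```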
We used to rely on the HTTP conn being cancelled for this behaviour.
When the sliding sync proxy is used behind a reverse proxy there is
no guarantee that the upstream conn will be cancelled, causing very
laggy and poor performance. We now manually cancel() the previous
request.
Pass it to extensions so they can determine whether they want to short-circuit
the sync loop. E2EE wants to short-circuit on OTK counts for the first request,
as they aren't enough of a reason to short-circuit mid-connection.
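A minimal sketch of what the extension hook could look like; the interface and names are hypothetical, not the real extension API:

```go
package sketch

// Response is a stand-in for the per-request extension response.
type Response struct{}

// Extension is a hypothetical interface: each extension reports whether its
// data should cause the blocked sync loop to return to the client now.
type Extension interface {
	AppendLive(res *Response, isInitial bool) (shortCircuit bool)
}

type e2eeExtension struct{}

func (e *e2eeExtension) AppendLive(res *Response, isInitial bool) bool {
	// ... populate res with OTK counts / fallback key types ...
	// OTK counts are returned on the first request, but on their own they are
	// not enough of a reason to wake up an established connection.
	return isInitial
}
```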
`sync3` contains data structures and logic which is very isolated and
testable (think ConnMap, Room, Request, SortableRooms, etc) whereas
`sync3/handler` contains control flow which calls into `sync3` data
structures.
This has numerous benefits:
- Gnarly complicated structs like `ConnState` are now more isolated
from the codebase, forcing better API design on `sync3` structs.
- The inability to do import cycles forces structs in `sync3` to remain
simple: they cannot pull in control flow logic from `sync3/handler`
without causing a compile error.
- For new developers, it's significantly easier to figure out where to start looking for
  code that executes when a new request is received.
- It reduces the number of things that `ConnState` can touch. Previously
  we were gut-wrenching out of convenience, but now we're forced to move
  more logic from `ConnState` into `sync3` (depending on the API design).
  For example, adding `SortableRooms.RoomIDs()`.
Let ConnState directly subscribe to GlobalCache rather than
the awful indirection of ConnMap -> Conn -> ConnState we had before.
We had that before because ConnMap is responsible for destroying old
connections (based on the TTL cache), so we could just subscribe once
and then look through the map to see who to notify. In the interests
of decoupling logic, we now just call ConnState.Destroy() when the
connection is removed from ConnMap which allows ConnState to subscribe
to GlobalCache on creation and remove its subscription on Destroy().
This makes it significantly clearer who is firing callbacks and where they
go, and now means ConnMap is simply in charge of maintaining maps of
user IDs -> Conn as well as terminating them when they expire via TTL.
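A minimal sketch of the new lifecycle, with simplified, hypothetical signatures (the real `GlobalCache`/`ConnMap` APIs differ):

```go
package sketch

// GlobalCache is a simplified stand-in for the real cache.
type GlobalCache struct {
	subs map[string]*ConnState
}

func NewGlobalCache() *GlobalCache {
	return &GlobalCache{subs: make(map[string]*ConnState)}
}

func (g *GlobalCache) Subscribe(userID string, cs *ConnState) { g.subs[userID] = cs }
func (g *GlobalCache) Unsubscribe(userID string)              { delete(g.subs, userID) }

type ConnState struct {
	userID      string
	globalCache *GlobalCache
}

// NewConnState subscribes to the GlobalCache for the lifetime of the conn.
func NewConnState(userID string, gc *GlobalCache) *ConnState {
	cs := &ConnState{userID: userID, globalCache: gc}
	gc.Subscribe(userID, cs)
	return cs
}

// Destroy removes the subscription; called by ConnMap when the conn expires.
func (cs *ConnState) Destroy() {
	cs.globalCache.Unsubscribe(cs.userID)
}

// ConnMap only maintains user ID -> Conn maps and terminates expired conns.
type ConnMap struct{}

func (m *ConnMap) onExpired(cs *ConnState) {
	cs.Destroy()
}
```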
Keep it pure (not dependent on `state.Storage`) to make testing
easier. The responsibility for fanning out user cache updates
lies with the Handler, as it generally deals with glue code.
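A minimal sketch of that fan-out, assuming hypothetical shapes for the Handler and the user cache:

```go
package sketch

// Update and UserCache are simplified, hypothetical shapes.
type Update struct{}

type UserCache struct{}

// OnNewEvent does pure in-memory bookkeeping, so it is easy to unit test.
func (c *UserCache) OnNewEvent(u Update) {}

// Handler owns the storage and the glue code, so it does the fan-out: it
// looks up the affected users' caches and forwards the update, keeping the
// cache itself free of any state.Storage dependency.
type Handler struct {
	userCaches map[string]*UserCache
}

func (h *Handler) onNewEvent(affectedUserIDs []string, update Update) {
	for _, userID := range affectedUserIDs {
		if uc, ok := h.userCaches[userID]; ok {
			uc.OnNewEvent(update)
		}
	}
}
```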