This properly propagates the Go `context.Context` down to all HTTP calls, which means that outgoing requests carry the OTLP trace context.
This also adds the Jaeger propagator to the list of OTEL propagators, so that Synapse properly gets the incoming trace context.
It also upgrades all the OTEL libraries.
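For reference, a minimal sketch of this kind of setup using the OTEL Go SDK and the contrib Jaeger propagator (not the proxy's actual code) looks roughly like:

```go
package tracing

import (
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
	jaegerprop "go.opentelemetry.io/contrib/propagators/jaeger"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
)

// NewTracedClient registers the W3C TraceContext, Baggage and Jaeger propagators so
// incoming requests in either header format are honoured, and wraps the default
// transport so outgoing requests carry the trace context from their context.Context.
func NewTracedClient() *http.Client {
	otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
		propagation.TraceContext{},
		propagation.Baggage{},
		jaegerprop.Jaeger{},
	))
	// otelhttp injects the registered propagators into each outgoing request,
	// provided callers build requests with http.NewRequestWithContext(ctx, ...).
	return &http.Client{Transport: otelhttp.NewTransport(http.DefaultTransport)}
}
```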
Jaeger spans can be sent as OTLP, so this is mostly a semantic change for
the collector, which is more flexible if it accepts OTLP traces
rather than jaeger.thrift traces.
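A hedged sketch of exporting spans over OTLP/HTTP with the OTEL Go SDK; the endpoint and options here are illustrative, not the proxy's configuration:

```go
package tracing

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// NewTracerProvider sends spans to the collector as OTLP rather than
// jaeger.thrift. "localhost:4318" is just the conventional OTLP/HTTP port.
func NewTracerProvider(ctx context.Context) (*sdktrace.TracerProvider, error) {
	exporter, err := otlptracehttp.New(ctx,
		otlptracehttp.WithEndpoint("localhost:4318"), // illustrative collector address
		otlptracehttp.WithInsecure(),
	)
	if err != nil {
		return nil, err
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	otel.SetTracerProvider(tp)
	return tp, nil
}
```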
Using JSONB columns adds too much DB load. Prefer a slightly
faster serialisation format instead, and use the old system of
handling BYTEA, which is about 2x faster.
```
BenchmarkSerialiseDeviceDataJSON-12 1770 576646 ns/op 426297 B/op 6840 allocs/op
BenchmarkSerialiseDeviceDataCBOR-12 4635 247509 ns/op 253971 B/op 4796 allocs/op
```
These benchmarks used a growing list of 1000 device list changes.
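For illustration, a benchmark in the same spirit might look like the sketch below; the `deviceListChange` struct and the choice of `fxamacker/cbor` are assumptions for the example, not necessarily what the proxy serialises:

```go
package storage

import (
	"encoding/json"
	"testing"

	"github.com/fxamacker/cbor/v2" // illustrative CBOR library
)

// deviceListChange is a stand-in for the real device data being serialised.
type deviceListChange struct {
	UserID string `json:"user_id"`
	New    int    `json:"new"`
}

func makeChanges(n int) []deviceListChange {
	changes := make([]deviceListChange, n)
	for i := range changes {
		changes[i] = deviceListChange{UserID: "@alice:example.com", New: i}
	}
	return changes
}

func BenchmarkSerialiseDeviceDataJSON(b *testing.B) {
	changes := makeChanges(1000)
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		if _, err := json.Marshal(changes); err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkSerialiseDeviceDataCBOR(b *testing.B) {
	changes := makeChanges(1000)
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		if _, err := cbor.Marshal(changes); err != nil {
			b.Fatal(err)
		}
	}
}
```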
Features:
- Add `typing` extension.
- Add `receipts` extension.
- Add comprehensive Prometheus `/metrics`, activated via `SYNCV3_PROM`.
- Add `SYNCV3_PPROF` support.
- Add `by_notification_level` sort order.
- Add `include_old_rooms` support.
- Add support for `$ME` and `$LAZY`.
- Add correct filtering when `*,*` is used as `required_state` (see the request sketch after this list).
- Add `num_live` to each room response to indicate how many timeline entries are live.
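For orientation, a request body exercising several of these parameters might look roughly like the sketch below; the field names follow MSC3575 and are illustrative, not copied from this proxy version:

```go
package main

import "encoding/json"

// buildRequest sketches a sliding sync request using the new parameters: sort
// by notification level, wildcard + lazy-loaded required_state, old-room
// inclusion, and the typing/receipts extensions.
func buildRequest() ([]byte, error) {
	body := map[string]interface{}{
		"lists": []interface{}{
			map[string]interface{}{
				"ranges": [][2]int{{0, 20}},
				"sort":   []string{"by_notification_level", "by_recency"},
				"required_state": [][2]string{
					{"*", "*"},                 // all state...
					{"m.room.member", "$LAZY"}, // ...with lazy-loaded members
					{"m.room.member", "$ME"},   // ...plus our own member event
				},
				"timeline_limit":    10,
				"include_old_rooms": map[string]interface{}{"timeline_limit": 0},
			},
		},
		"extensions": map[string]interface{}{
			"typing":   map[string]interface{}{"enabled": true},
			"receipts": map[string]interface{}{"enabled": true},
		},
	}
	return json.Marshal(body)
}
```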
Bug fixes:
- Use a stricter comparison function on ranges: fixes an issue whereby unit tests fail on go1.19 due to a change in the sorting algorithm.
- Send back an `errcode` on HTTP errors (e.g. expired sessions).
- Remove `unsigned.txn_id` on insertion into the DB. Otherwise users would see other users' txn IDs :(
- Improve the range delta algorithm: previously it didn't handle cases like `[0,20] -> [20,30]` and would panic (an illustrative delta calculation is sketched after this list).
- Send HTTP 400 for invalid range requests.
- Don't publish no-op unread counts, which just add extra noise.
- Fix leaking DB connections which could eventually consume all available connections.
- Ensure we always unblock `WaitUntilInitialSync`, even on invalid access tokens. Other code relies on `WaitUntilInitialSync()` actually returning at _some_ point: for example, on startup we have N workers which bound the number of concurrent pollers made at any one time, so we must not hog a worker forever.
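As an illustration of the range delta mentioned above (not the proxy's actual algorithm, which also has to handle multiple ranges), computing which indexes enter and leave the window when it moves from `[0,20]` to `[20,30]` might look like:

```go
package sync3

// indexRange is an inclusive [start, end] window, matching the [0,20] style
// ranges in the API.
type indexRange [2]int64

// delta returns the indexes newly added and removed when the requested window
// moves from prev to next. For [0,20] -> [20,30] this yields added 21..30 and
// removed 0..19, with index 20 unchanged.
func delta(prev, next indexRange) (added, removed []int64) {
	in := func(i int64, r indexRange) bool { return i >= r[0] && i <= r[1] }
	for i := next[0]; i <= next[1]; i++ {
		if !in(i, prev) {
			added = append(added, i)
		}
	}
	for i := prev[0]; i <= prev[1]; i++ {
		if !in(i, next) {
			removed = append(removed, i)
		}
	}
	return added, removed
}
```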
Improvements:
- Greatly improve startup times of sync3 handlers by improving `JoinedRoomsTracker`: a modest amount of data used to take ~28s to create the handler; now it takes 4s.
- Massively improve initial v3 sync times by refactoring `JoinedRoomsTracker`: from ~47s to <1s.
- Add `SlidingSyncUntil...` in tests to reduce races.
- Tweak the API shape of `JoinedUsersForRoom` to reduce state block processing time for large rooms from 63s to 39s.
- Add trace task for initial syncs.
- Include the proxy version in UA strings.
- HTTP errors now wait 1s before returning to stop clients tight-looping on error.
- Pending event buffer is now 2000.
- Index the room ID first to cull the most events when returning timeline entries. Speeds up `SelectLatestEventsBetween` by a factor of 8.
- Remove cancelled `m.room_key_request` events from the to-device inbox. Cuts down the number of events in the inbox by ~94% for very large (20k+) inboxes and ~50% for moderately sized (200 events) inboxes. Adds book-keeping to remember the unacked to-device position for each client. (A sketch of the cancellation filtering follows this list.)
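A rough sketch of that cancellation filtering, using the `action`/`request_id` fields from Matrix `m.room_key_request` content; the event struct is minimal and illustrative:

```go
package sync2

// toDeviceEvent is a minimal view of a to-device event; only the fields
// needed for cancellation matching are shown.
type toDeviceEvent struct {
	Type    string `json:"type"`
	Content struct {
		Action    string `json:"action"`     // "request" or "request_cancellation"
		RequestID string `json:"request_id"` // pairs a cancellation with its request
	} `json:"content"`
}

// removeCancelledKeyRequests drops m.room_key_request events whose request_id
// was later cancelled, along with the cancellations themselves.
func removeCancelledKeyRequests(events []toDeviceEvent) []toDeviceEvent {
	cancelled := make(map[string]bool)
	for _, ev := range events {
		if ev.Type == "m.room_key_request" && ev.Content.Action == "request_cancellation" {
			cancelled[ev.Content.RequestID] = true
		}
	}
	kept := events[:0]
	for _, ev := range events {
		if ev.Type == "m.room_key_request" && cancelled[ev.Content.RequestID] {
			continue // drop both the original request and its cancellation
		}
		kept = append(kept, ev)
	}
	return kept
}
```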
Specifically (a sketch of the resulting response shape follows this list):
- Remove top-level `ops`, and replace with `lists`.
- Remove list indexes from `ops`, and rely on contextual location information.
- Remove top-level `counts` and instead embed them into each list contextually.
- Refactor connstate to reflect new API shape.
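A hedged sketch of the new response shape; the Go types and field names here are illustrative rather than copied from the connstate refactor:

```go
package sync3

import "encoding/json"

// ResponseList replaces the old top-level `ops` and `counts`: each list now
// carries its own count and its own ops, so ops no longer need a list index.
type ResponseList struct {
	Count int64             `json:"count"`
	Ops   []json.RawMessage `json:"ops,omitempty"`
}

// Response is the top-level shape under the new API: per-list data lives in
// `lists`, room data is keyed by room ID, and the position token moves forward
// on each request/response cycle.
type Response struct {
	Pos   string                     `json:"pos"`
	Lists []ResponseList             `json:"lists"`
	Rooms map[string]json.RawMessage `json:"rooms"`
}
```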
Still to do:
- Remove `rooms` / `room` from the op response, and bundle it into the
top-level `rooms`.
- Remove `UPDATE` op.
- Add `room_id` / `room_ids` field to ops to let clients know which rooms each op relates to.
- Replace `PrevBatch string` in user room data with `PrevBatches lru.Cache`.
This allows us to keep prev batch tokens in memory rather than doing
N sequential DB lookups, which would take ~4s for ~150 rooms on the Postgres
instance running the database. The tokens are keyed off a tuple of the
event ID being searched and the latest event in the room, to allow prev
batches to be assigned when new sync v2 responses arrive. (A cache sketch
follows below.)
- Thread through context to complex storage functions for profiling
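The `PrevBatches` cache described above might be sketched as below, assuming `hashicorp/golang-lru` (which matches the `lru.Cache` type name); the wrapper and its method names are illustrative:

```go
package caches

import lru "github.com/hashicorp/golang-lru"

// prevBatchKey is the tuple described above: the event being searched for and
// the latest event in the room at the time.
type prevBatchKey struct {
	EventID       string
	LatestEventID string
}

// prevBatchCache keeps recently seen prev_batch tokens in memory so we avoid
// N sequential DB lookups.
type prevBatchCache struct {
	cache *lru.Cache
}

func newPrevBatchCache(size int) (*prevBatchCache, error) {
	c, err := lru.New(size)
	if err != nil {
		return nil, err
	}
	return &prevBatchCache{cache: c}, nil
}

func (p *prevBatchCache) Set(eventID, latestEventID, prevBatch string) {
	p.cache.Add(prevBatchKey{eventID, latestEventID}, prevBatch)
}

func (p *prevBatchCache) Get(eventID, latestEventID string) (string, bool) {
	val, ok := p.cache.Get(prevBatchKey{eventID, latestEventID})
	if !ok {
		return "", false
	}
	return val.(string), true
}
```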
This abstracts the long-polling nature of the HTTP connection.
Note that we cannot just maintain a server-side buffer of
events to feed down the connection, because the client can
drastically alter _which_ events it should be sent.
There still needs to be a request/response cycle, but we
can factor out retry handling (duplicate request detection)
and incrementing of the positions.
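A minimal sketch of such a connection wrapper, with hypothetical `Request`/`Response`/`handler` types, showing duplicate request detection and position incrementing factored out of the actual response computation:

```go
package sync3

import (
	"context"
	"sync"
)

// Request and Response are stand-ins for the real sliding sync types; only the
// position token matters for this sketch.
type Request struct{ Pos int64 }
type Response struct {
	Pos int64
	// ... room data, ops, etc.
}

// handler recomputes the response for a request; this is where list/sort/filter
// recalculation would happen, since the client can change which events it
// wants on every request.
type handler func(ctx context.Context, req *Request) (*Response, error)

// Conn factors retry handling and position bookkeeping out of the handler: if
// the client lost our response and retries the same position, we replay the
// cached response instead of recomputing and advancing the position.
type Conn struct {
	mu           sync.Mutex
	lastReqPos   int64
	lastResponse *Response
	handle       handler
}

func (c *Conn) OnIncomingRequest(ctx context.Context, req *Request) (*Response, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	// Duplicate request detection: same position as the last processed request.
	if c.lastResponse != nil && req.Pos == c.lastReqPos {
		return c.lastResponse, nil
	}
	resp, err := c.handle(ctx, req)
	if err != nil {
		return nil, err
	}
	// Increment the position: the client echoes resp.Pos on its next request.
	resp.Pos = req.Pos + 1
	c.lastReqPos = req.Pos
	c.lastResponse = resp
	return resp, nil
}
```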