We're sending broadcast messages with awaited delays between each channel.send(). Server timestamps confirm the delays are real, yet the client receives the messages simultaneously, but only for the first few batches. After what looks like a buffer draining, messages start flowing individually with correct timing.
We ran three otherwise identical tests, changing only the server delay. Client-side processing delay was 0ms; we measured raw arrival via Date.now() in the broadcast callback.
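For context, a minimal sketch of the client-side measurement, assuming supabase-js v2; the URL, key, channel name, and event name are placeholders, not our exact values:

```ts
import { createClient } from '@supabase/supabase-js'

// Placeholder URL/key; the real tests run against a local stack.
const supabase = createClient('http://localhost:54321', 'anon-key')

const channel = supabase
  .channel('room-1')
  .on('broadcast', { event: 'game' }, (msg) => {
    // Timestamp immediately on arrival; no other work in the callback.
    console.log(Date.now(), msg.payload)
  })
  .subscribe()
```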
Each game round has a setup phase (~12 events, 400-800ms apart) then a play phase (5 groups × 4 rapid sends). Only the play phase shows batching.
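A sketch of the play-phase sender, under the assumption that the loop structure matches what we run (event and payload names are illustrative):

```ts
import type { RealtimeChannel } from '@supabase/supabase-js'

const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms))

// 5 groups x 4 sends, with an awaited delay between every send.
// sendDelayMs is 500 / 800 / 2000 across the three tests.
async function playPhase(channel: RealtimeChannel, sendDelayMs: number) {
  for (let group = 1; group <= 5; group++) {
    for (let i = 0; i < 4; i++) {
      // Server-side timestamp; these confirm the delays are real.
      console.log(Date.now(), `send group=${group} msg=${i}`)
      await channel.send({
        type: 'broadcast',
        event: 'game',
        payload: { group, i },
      })
      await delay(sendDelayMs)
    }
  }
}
```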
Percentage of groups where all 4 messages arrived within 1ms of each other (classifier sketch follows the table):

| Group | 500ms delay | 800ms delay | 2000ms delay |
|-------|-------------|-------------|--------------|
| 1     | 100%        | 100%        | ~100%        |
| 2     | 100%        | 100%        | partial      |
| 3     | 100%        | 93%         | flowing      |
| 4     | 100%        | 40%         | flowing      |
| 5     | 100%        | 20%         | flowing      |
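The classifier behind the table is trivial; a sketch, where isBatched is a hypothetical helper name:

```ts
// A group counts as "batched" when all 4 arrival timestamps
// (Date.now() in the callback) span at most 1ms.
function isBatched(arrivals: number[]): boolean {
  return Math.max(...arrivals) - Math.min(...arrivals) <= 1
}
```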
When messages DO flow, timing matches the server delay perfectly (~850ms gaps for 800ms, ~2s for 2000ms).
The pattern is consistent across 75+ groups: earlier groups in a round batch, later groups flow. Longer delays drain the buffer faster. At 500ms it never drains within 5 groups. At 2000ms it drains by group 2-3. The setup phase events always arrive individually.
Question: Is there a buffer in the Realtime delivery path (WebSocket layer, Phoenix Channels, Realtime server) that coalesces pending outbound messages? Can it be flushed or disabled?
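One workaround we could test, sketched below: supabase-js exposes a broadcast ack option, and with it enabled channel.send() should resolve only after the Realtime server acknowledges the message, which might stop sends from coalescing in flight. This reuses the supabase client from the sketch above; whether it actually changes delivery behavior is exactly what we'd like to confirm.

```ts
// Assumption: broadcast: { ack: true } makes channel.send() await a
// server acknowledgement (per the Supabase Realtime docs).
const ackedChannel = supabase.channel('room-1-acked', {
  config: { broadcast: { ack: true } },
})

async function sendAcked(group: number, i: number) {
  const status = await ackedChannel.send({
    type: 'broadcast',
    event: 'game',
    payload: { group, i },
  })
  // status resolves to 'ok' | 'timed out' | 'error'
  if (status !== 'ok') console.warn('broadcast not acked:', status)
}
```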
Free tier, @supabase/supabase-js latest, Node.js server, broadcast only, single channel, all local. Happy to share full logs.