Backpressure & flow control

DataChannels have a bufferedAmount — the number of bytes you've queued for transmission that haven't gone out yet. If you call send() faster than the pipe can drain, that number grows without bound. Eventually the browser closes the channel or the tab runs out of memory.

The standard fix is to back off when bufferedAmount gets high, and resume when it drops. rtc.io does that for you, but understanding the mechanism matters as soon as you're sending large payloads (file transfers, codec output, anything > a few hundred KB).

The two watermarks

channel.ts (defaults)
export const HIGH_WATERMARK = 16_777_216; // 16 MB — pause sending above this
export const LOW_WATERMARK = 1_048_576; // 1 MB — resume sending below this

When bufferedAmount ≥ HIGH_WATERMARK, channel.send() returns false and your bytes are held in the JS-side queue. When bufferedAmount falls back through LOW_WATERMARK, the browser fires bufferedamountlow (driven by RTCDataChannel.bufferedAmountLowThreshold, which the library sets to the channel's lowWatermark) and rtc.io emits 'drain' on your channel.
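
If you want to see the primitive underneath, here is roughly the same mechanism on a bare RTCDataChannel with no rtc.io, a minimal sketch using the library's default thresholds:

// Plain-WebRTC sketch of the watermark mechanism rtc.io wraps.
const pc = new RTCPeerConnection();
const dc = pc.createDataChannel("raw");
dc.bufferedAmountLowThreshold = 1_048_576; // 'bufferedamountlow' fires once we drop below 1 MB

function trySend(bytes: ArrayBuffer): boolean {
  if (dc.bufferedAmount >= 16_777_216) {
    // Above the high watermark: hold the chunk until the browser drains.
    dc.addEventListener("bufferedamountlow", () => dc.send(bytes), { once: true });
    return false;
  }
  dc.send(bytes);
  return true;
}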

Both are configurable per-channel via ChannelOptions if the defaults don't fit your shape:

const live = socket.peer(id).createChannel("live", {
  highWatermark: 8 * 1024 * 1024, // pause once 8 MB are in flight
  lowWatermark: 2 * 1024 * 1024, // resume once it's back under 2 MB
});

When to tune them:

  • Lower highWatermark caps the browser-side transport buffer. Less memory pressure and shorter steady-state end-to-end latency, at the cost of throughput on bursty senders (you spend more time paused).
  • Higher highWatermark lets the browser hold more bytes in flight — useful on high-bandwidth fat-pipe links (gigabit LAN, server-to-server) where you want a deeper pipeline and the memory's available.
  • Higher lowWatermark fires 'drain' sooner, so the sender resumes earlier — smoother throughput, fuller transport buffer on average.
  • Lower lowWatermark fires 'drain' later, after a deeper drain — burstier throughput, more headroom between bursts.

lowWatermark must stay below highWatermark; otherwise drain fires immediately on every send and the throttling collapses. The library doesn't enforce this — it's on you. Tune the ratio (1:16 by default) before tuning the absolute numbers.
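
Since the library won't warn you, a hypothetical guard like the one below (not part of rtc.io's API) is cheap insurance while tuning:

// Hypothetical guard: rtc.io ships no such check.
function assertWatermarks<T extends { highWatermark: number; lowWatermark: number }>(opts: T): T {
  if (opts.lowWatermark >= opts.highWatermark) {
    throw new Error(`lowWatermark (${opts.lowWatermark}) must be below highWatermark (${opts.highWatermark})`);
  }
  return opts;
}

const tuned = socket.peer(id).createChannel("live", assertWatermarks({
  highWatermark: 8 * 1024 * 1024,
  lowWatermark: 2 * 1024 * 1024,
}));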

The queue budget

Bytes you call send() with before the channel is open (or while bufferedAmount is high) get queued in JS, not in the browser's transport. There's a per-channel cap on that queue:

export const QUEUE_BUDGET = 1_048_576; // 1 MB default

If you exceed it, rtc.io fires 'error' on the channel with a clear message:

RTCIOChannel: queue budget exceeded — wait for 'drain' before sending more
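
Catch it like any other channel event. A minimal sketch, assuming the 'error' handler receives an Error carrying the message above:

channel.on('error', (err: Error) => {
  if (err.message.includes('queue budget exceeded')) {
    // Stop producing and wait for 'drain' (or raise queueBudget, below).
  }
});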

You can raise it per-channel if you have headroom and want more buffering:

const file = socket.peer(id).createChannel("file", { queueBudget: 16 * 1024 * 1024 });

The budget applies to JS queueing only. Once the channel opens and bytes flow into the browser's transport, that's what bufferedAmount measures.
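
One way to think about the number: the default 1 MB budget holds exactly 64 chunks at the 16 KB chunk size used later on this page. A sketch, with illustrative names:

// Derive queueBudget from chunk size × how many sends may pile up in JS
// before 'error' fires. 64 × 16 KB = 1_048_576 bytes, i.e. the default.
const CHUNK_SIZE = 16 * 1024;
const MAX_QUEUED_CHUNKS = 64;

const bulk = socket.peer(id).createChannel("bulk", {
  queueBudget: CHUNK_SIZE * MAX_QUEUED_CHUNKS,
});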

What send returns

const ok = channel.send(arrayBuffer);
  • true — sent immediately. Channel is open and below high-water.
  • false — queued (or refused). One of:
    • Channel is still connecting — buffered, will flush on 'open'.
    • bufferedAmount ≥ the channel's highWatermark — back off until 'drain'.
    • There's already a queue — your send is appended to it.
    • Queue budget exceeded — 'error' fires, no buffering happened.

emit (the higher-level JSON envelope API) wraps send and behaves the same way:

const ok = channel.emit("event", payload); // returns boolean
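
Which means emit participates in the same back-off. For instance ('progress' and its payload are made-up names):

// emit() returns the same boolean contract as send().
if (!channel.emit('progress', { pct: 42 })) {
  await new Promise((res) => channel.once('drain', res));
}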

The drain pattern

For large transfers, the canonical loop is:

async function streamFile(channel, file) {
  if (channel.readyState !== "open") {
    await new Promise((res) => channel.once("open", res));
  }

  const CHUNK = 16 * 1024; // FILE_CHUNK_SIZE, covered under "Picking a chunk size" below
  for (let offset = 0; offset < file.size; offset += CHUNK) {
    const buf = await file.slice(offset, offset + CHUNK).arrayBuffer();
    if (!channel.send(buf)) {
      await new Promise((res) => channel.once("drain", res));
    }
  }
}

once("drain") resolves the next time bufferedAmount falls through the channel's lowWatermark (1 MB by default). The send-then-await-drain loop is the simplest correct pattern for streaming a file or any large blob.

Picking a chunk size

export const FILE_CHUNK_SIZE = 16 * 1024; // 16 KB

16 KB is the sweet spot:

  • The SCTP message limit advertised by Chromium is 256 KB, but 16 KB is the largest message size known to interoperate reliably across implementations.
  • Smaller chunks → finer-grained progress reporting and faster cancellation.
  • Larger chunks → slightly less per-message overhead.

For interactive payloads (cursor positions, RPC requests) you don't need to chunk — emit a JSON envelope and you're done. Chunking matters only when a single payload would dwarf the queue budget.
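
For instance, a cursor update is one small envelope (event name and payload are hypothetical):

channel.emit('cursor', { x: 0.42, y: 0.87 }); // a few dozen bytes: no chunking needed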

Send order

If send returns false, the channel queues your data and tries to flush on the next open/drain. Order is preserved. Don't try to recover from a false return by calling send again immediately — that just appends another item. Wait for 'drain'.

Receiving with backpressure

The receiving side has a separate buffer (the SCTP receive window). It can't directly tell you "I'm overwhelmed" — that's not a thing in WebRTC's DataChannel API. If you're processing each 'data' event on a slow main thread (e.g. CPU-bound parsing), the receive buffer can grow.

Two practical mitigations:

  1. Process in chunks but batch state updates. If you setState per chunk, React re-renders on every receive. Buffer chunks in memory and flush state every N ms or N bytes.
  2. Use a Web Worker for parsing. Move the CPU work off the main thread; send the parsed result back as a postMessage.

This is a general "don't block the main thread" issue, not rtc.io specific.
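
A sketch of the first mitigation; the render callback and the 100 ms flush interval are assumptions, not rtc.io API:

import { RTCIOChannel } from 'rtc.io';

// Collect chunks off the wire immediately, but touch UI state at most
// once per interval instead of once per 'data' event.
function batchedReceiver(channel: RTCIOChannel, render: (bytesSoFar: number) => void) {
  const chunks: ArrayBuffer[] = []; // kept for final Blob assembly
  let bytes = 0;
  let dirty = false;

  channel.on('data', (chunk: ArrayBuffer) => {
    chunks.push(chunk);
    bytes += chunk.byteLength;
    dirty = true; // mark, don't render per chunk
  });

  const timer = setInterval(() => {
    if (!dirty) return;
    dirty = false;
    render(bytes); // one state update per interval, however many chunks arrived
  }, 100);

  channel.on('close', () => clearInterval(timer));
  return chunks;
}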

What happens on close

If the channel closes mid-send, the queue is dropped and 'close' fires. A pending await on 'drain' would otherwise hang forever, so pair the drain listener with a 'close' listener that rejects:

function waitForDrain(channel) {
  return new Promise((resolve, reject) => {
    const onDrain = () => { cleanup(); resolve(); };
    const onClose = () => { cleanup(); reject(new Error("channel closed")); };
    const cleanup = () => {
      channel.off("drain", onDrain);
      channel.off("close", onClose);
    };
    channel.on("drain", onDrain);
    channel.on("close", onClose);
  });
}
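
With the helper in place, the drain wait inside a streaming loop becomes close-safe:

// streamFile's inner loop, rewritten so a mid-transfer close rejects
// the await instead of leaving it hanging.
for (let offset = 0; offset < file.size; offset += CHUNK) {
  const buf = await file.slice(offset, offset + CHUNK).arrayBuffer();
  if (!channel.send(buf)) await waitForDrain(channel);
}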

The bundled file-transfer helper in our tutorial does exactly this.

Diagnosing buffer bloat

channel.bufferedAmount can be read at any time:

console.log(channel.readyState, channel.bufferedAmount);

If you see it pinned near the channel's highWatermark for long stretches, your sender is faster than the link. Either chunk smaller, throttle the producer, accept the existing rtc.io pause/drain cycle, or raise highWatermark if you have memory headroom and want a deeper pipeline.

If bufferedAmount is consistently zero and you're still seeing slowness, the bottleneck is on the wire (cellular, congested router) or in receive-side processing — not in your sender.
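
A throwaway sampler makes the distinction visible (the 500 ms interval is arbitrary):

// Poll occupancy: pinned near highWatermark means the sender outruns the
// link; a steady zero means the bottleneck is elsewhere.
const sampler = setInterval(() => {
  console.log(`readyState=${channel.readyState} bufferedAmount=${channel.bufferedAmount}`);
}, 500);
channel.on('close', () => clearInterval(sampler));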

Per-peer stats

For end-to-end visibility, socket.getSessionStats(peerId) exposes the round-trip time and the codec stats for the active connection:

const stats = await socket.getSessionStats(peerId);
console.log(stats.rtt, stats.codecs, stats.outboundRTP);

For data-only stats (no media), see Stats.

Live: a real backpressure-aware sender

The whole send() returns false → await once('drain') loop, in 60 lines.

File transfer · backpressure handled correctly
Pick a file in tab #1 to send to tab #2. The progress bar pauses when the buffer fills past the high watermark.
src/main.ts
import io, { RTCIOChannel } from 'rtc.io';
import { setupRoom } from './room';
import './styles.css';

const { ROOM, NAME } = setupRoom();

const app = document.querySelector<HTMLDivElement>('#app')!;
app.innerHTML = `
  <div class="card">
    <h1>File transfer · room <code>${ROOM}</code></h1>
    <p><small>Per-peer ordered DataChannel · respects backpressure via <code>send() === false</code> &amp; <code>'drain'</code>.</small></p>
    <input id="file" type="file" />
    <progress id="prog" max="100" value="0" style="width:100%;margin-top:10px;display:none"></progress>
    <p id="status"><small>Click <strong>Open 2nd tab ↗</strong> to bring a peer online.</small></p>
    <div id="received" style="margin-top:14px;display:flex;flex-direction:column;gap:8px"></div>
    <p style="margin-top:10px"><small>You are <code>${NAME}</code>.</small></p>
  </div>`;

const fileInput = document.getElementById('file') as HTMLInputElement;
const prog = document.getElementById('prog') as HTMLProgressElement;
const status = document.getElementById('status')!;
const received = document.getElementById('received')!;

const socket = io('https://server.rtcio.dev', {
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});
socket.server.emit('join-room', { roomId: ROOM, name: NAME });

const channels = new Map<string, RTCIOChannel>();

socket.on('peer-connect', ({ id }) => {
  // Both sides call createChannel('file') with the same label; rtc.io opens
  // it negotiated (each end describes the same SCTP stream id in its initial
  // SDP), so the channel comes up without a DC-OPEN handshake.
  const ch = socket.peer(id).createChannel('file', { ordered: true });
  channels.set(id, ch);
  attachReceiver(ch);
  status.innerHTML = `<small>Peer ${id.slice(-4)} ready · pick a file to send.</small>`;
});

socket.on('peer-disconnect', ({ id }) => {
  channels.delete(id);
  if (channels.size === 0) status.innerHTML = '<small>No peers connected.</small>';
});

interface FileMeta { tid: string; name: string; size: number; mime: string }

function attachReceiver(channel: RTCIOChannel) {
  let state: { meta: FileMeta; chunks: ArrayBuffer[]; bytes: number } | null = null;

  channel.on('meta', (meta: FileMeta) => {
    state = { meta, chunks: [], bytes: 0 };
  });

  channel.on('data', (chunk: ArrayBuffer) => {
    if (!state) return;
    state.chunks.push(chunk);
    state.bytes += chunk.byteLength;
  });

  channel.on('eof', () => {
    if (!state) return;
    const blob = new Blob(state.chunks, { type: state.meta.mime });
    const url = URL.createObjectURL(blob);
    const row = document.createElement('a');
    row.href = url;
    row.download = state.meta.name;
    row.textContent = `📥 ${state.meta.name} (${(blob.size / 1024).toFixed(1)} KB) — click to download`;
    row.style.cssText = 'color:var(--accent);text-decoration:underline';
    received.appendChild(row);
    state = null;
  });
}

fileInput.addEventListener('change', async () => {
  const file = fileInput.files?.[0];
  if (!file) return;
  if (channels.size === 0) {
    alert('No peers connected — click "Open 2nd tab ↗" first.');
    return;
  }
  prog.style.display = 'block';
  prog.value = 0;

  const tid = crypto.randomUUID();
  const CHUNK = 16 * 1024;

  for (const [, channel] of channels) {
    channel.emit('meta', { tid, name: file.name, size: file.size, mime: file.type });
  }

  let sent = 0;
  for (let off = 0; off < file.size; off += CHUNK) {
    const buf = await file.slice(off, off + CHUNK).arrayBuffer();
    for (const [, channel] of channels) {
      // send() returning false means the chunk was queued. Wait for the
      // 'drain' event before pushing more — this is the entire backpressure
      // contract.
      if (!channel.send(buf)) {
        await new Promise<void>((r) => channel.once('drain', () => r()));
      }
    }
    sent += buf.byteLength;
    prog.value = Math.round((sent / file.size) * 100);
  }

  for (const [, channel] of channels) channel.emit('eof', { tid });
  status.innerHTML = `<small>Sent <strong>${file.name}</strong> to ${channels.size} peer(s).</small>`;
});