Options
Two option bags surface in rtc.io: the one passed to io(url, options) and the one passed to createChannel(name, options).
SocketOptions
Passed to io(url, options). Extends socket.io-client's SocketOptions, so any option that works on plain socket.io works here. The fields specific to rtc.io are:
interface SocketOptions extends Partial<RootSocketOptions> {
  iceServers: RTCIceServer[];
  debug?: boolean;
  watchdog?: {
    timeout?: number;     // ms
    hintTimeout?: number; // ms
    hintTTL?: number;     // ms
  };
}
iceServers
iceServers: RTCIceServer[]
Standard RTCIceServer array. Used for the underlying RTCPeerConnection. Default if omitted: a pair of Google STUN servers:
[{ urls: ["stun:stun1.l.google.com:19302", "stun:stun2.l.google.com:19302"] }]
For TURN configuration and credential vending, see ICE and TURN.
debug
debug?: boolean // default false
Turns on per-step library logging. Useful while wiring up your first connection, noisy in production.
const socket = io(URL, { iceServers: [...], debug: true });
Lines look like:
[rtc-io][X6AAAJ] Initialized polite peer { peer: "abc123" }
[rtc-io][X6AAAJ] Sent offer { peer: "abc123" }
[rtc-io][X6AAAJ] Received answer { peer: "abc123", signalingState: "have-local-offer" }
[rtc-io][X6AAAJ] Ctrl channel open { peer: "abc123" }
The 6-character tag is the last 6 characters of socket.id.
watchdog
watchdog?: {
  timeout?: number;     // ms · default 12_000
  hintTimeout?: number; // ms · default 2_500
  hintTTL?: number;     // ms · default 30_000
}
Tunes the per-peer liveness watchdog that decides when an unhealthy WebRTC connection should be torn down. All three fields are in milliseconds. See Lifecycle for the full state machine.
- `timeout` (ms) — how long to wait after `connectionState` flips to `disconnected`/`failed` before declaring the peer dead. Larger values tolerate longer NAT rebinds, mobile network handoffs, and ICE restarts; smaller values free resources sooner at the risk of tearing down recoverable peers.
- `hintTimeout` (ms) — the shorter grace window used when the server-side `peer-left` hint corroborates the disconnect within `hintTTL`. Set this equal to `timeout` to ignore the hint entirely.
- `hintTTL` (ms) — how long a `peer-left` hint stays "fresh" enough to shorten the watchdog. Beyond this window the hint is ignored. Set to `0` to never use the shortened path.
Each field is independent; omit one to keep its default.
const socket = io(URL, {
  iceServers: [...],
  watchdog: {
    timeout: 30_000,    // ms — tolerate longer mobile network blips
    hintTimeout: 5_000, // ms — still trust server hints, but a bit looser
    hintTTL: 60_000,    // ms — accept hints from up to a minute ago
  },
});
Negative values, NaN, and non-numeric inputs are silently ignored — the documented default is used in their place.
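The interaction between the three knobs can be sketched as a pure function. This is an illustrative helper, not part of the rtc.io API; the names `effectiveTimeout`, `sanitize`, and `hintAgeMs` are invented for the sketch, and the sanitization mirrors the rule above:

```typescript
interface WatchdogOptions {
  timeout?: number;     // ms
  hintTimeout?: number; // ms
  hintTTL?: number;     // ms
}

// Negative values, NaN, and non-numbers fall back to the documented default.
function sanitize(value: unknown, fallback: number): number {
  return typeof value === "number" && Number.isFinite(value) && value >= 0
    ? value
    : fallback;
}

// Pick the deadline the watchdog will wait for: the shorter hintTimeout
// applies only when a peer-left hint arrived within hintTTL.
function effectiveTimeout(opts: WatchdogOptions, hintAgeMs: number | null): number {
  const timeout = sanitize(opts.timeout, 12_000);
  const hintTimeout = sanitize(opts.hintTimeout, 2_500);
  const hintTTL = sanitize(opts.hintTTL, 30_000);
  const hintIsFresh = hintAgeMs !== null && hintTTL > 0 && hintAgeMs <= hintTTL;
  return hintIsFresh ? hintTimeout : timeout;
}
```

With defaults, a hint seen 1 second ago selects the 2.5 s hinted path, while a 45-second-old hint (past the 30 s TTL) falls back to the full 12 s timeout.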
Inherited from socket.io-client
Pass any of these directly to io():
- `auth: { token: jwt }` — payload read by your `io.use(...)` middleware on the server.
- `query: { roomId: "demo" }` — query string appended to the WebSocket URL.
- `reconnection: false` — disable auto-reconnect.
- `reconnectionAttempts`, `reconnectionDelay`, `reconnectionDelayMax`, `randomizationFactor` — backoff knobs.
- `transports: ["websocket"]` — skip the long-poll fallback if you only target browsers that support WebSocket.
- `withCredentials: true` — send cookies cross-origin.
- `forceNew: true` — don't reuse a multiplexed `Manager` for this connection.
- `path: "/socket.io"` — server path; only change it if your server uses a non-default path.
- `multiplex: true` — share a `Manager` between multiple `io()` calls to the same origin.
The full list lives in the socket.io-client docs.
ChannelOptions
Passed as the second arg to socket.createChannel(name, options) and socket.peer(id).createChannel(name, options).
interface ChannelOptions {
  ordered?: boolean;
  maxRetransmits?: number;
  maxPacketLifeTime?: number;
  queueBudget?: number;
  highWatermark?: number;
  lowWatermark?: number;
}
ordered
ordered?: boolean // default true
True (the default) means in-order delivery — slightly higher latency in exchange for predictable ordering. Right for chat, file transfer, anything where order matters.
False lets the SCTP layer deliver messages out of order, trading strict sequencing for lower latency under packet loss. Right for telemetry-style messages (cursor positions, joystick inputs) where freshness matters more than sequence.
const cursors = socket.createChannel("cursor", { ordered: false });
maxRetransmits
maxRetransmits?: number // default unlimited
Cap on retransmit attempts per packet. Mutually exclusive with maxPacketLifeTime — set one or the other, not both.
maxRetransmits: 0 plus ordered: false is the lowest-latency setting: each packet is sent once, and if it's lost, it's gone.
const realtime = socket.createChannel("input", {
  ordered: false,
  maxRetransmits: 0,
});
maxPacketLifeTime
maxPacketLifeTime?: number // default unlimited, in milliseconds
Time-based equivalent of maxRetransmits — the SCTP layer keeps retrying for up to this many milliseconds, then gives up. Useful when you want bounded latency:
const ranged = socket.createChannel("position", {
  ordered: false,
  maxPacketLifeTime: 100, // give up after 100 ms
});
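These three options correspond to fields on the standard RTCDataChannelInit dictionary. A sketch of that translation, including the mutual-exclusivity check, follows. The helper is hypothetical (rtc.io's internal validation may differ), and the local `DataChannelInit` type stands in for the DOM's `RTCDataChannelInit` so the snippet is self-contained:

```typescript
// Local stand-in for the DOM's RTCDataChannelInit dictionary.
type DataChannelInit = {
  ordered?: boolean;
  maxRetransmits?: number;
  maxPacketLifeTime?: number;
};

// Sketch: translate the reliability subset of ChannelOptions into
// RTCDataChannelInit fields, rejecting the illegal combination.
function toDataChannelInit(opts: DataChannelInit): DataChannelInit {
  if (opts.maxRetransmits !== undefined && opts.maxPacketLifeTime !== undefined) {
    throw new Error("Set maxRetransmits or maxPacketLifeTime, not both");
  }
  const init: DataChannelInit = { ordered: opts.ordered ?? true };
  if (opts.maxRetransmits !== undefined) init.maxRetransmits = opts.maxRetransmits;
  if (opts.maxPacketLifeTime !== undefined) init.maxPacketLifeTime = opts.maxPacketLifeTime;
  return init;
}
```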
queueBudget
queueBudget?: number // default 1 MB (1_048_576)
Library-side cap on the number of bytes that can sit in the JS-side queue (used while the channel is connecting or while bufferedAmount is at high-water). Exceeding it fires error on the channel.
This is library state, not passed through to RTCDataChannel — it just controls how much we're willing to buffer for you before the channel is ready.
const file = socket.peer(id).createChannel("file", {
  ordered: true,
  queueBudget: 32 * 1024 * 1024, // 32 MB
});
For most apps the 1 MB default is fine. Raise it for big single-file transfers; lower it if you're tight on memory and want immediate backpressure.
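The behavior reduces to a byte-budgeted queue: enqueue while under budget, error on overflow. A minimal sketch of that semantics (illustrative only; `BoundedQueue` is not part of the rtc.io API):

```typescript
// Sketch: a byte-budgeted FIFO mirroring the documented queueBudget rule.
// Messages accumulate while the channel can't send; exceeding the budget
// surfaces an error instead of growing without bound.
class BoundedQueue {
  private queued: Uint8Array[] = [];
  private bytes = 0;

  constructor(private readonly budget = 1_048_576) {} // 1 MB default

  enqueue(chunk: Uint8Array): void {
    if (this.bytes + chunk.byteLength > this.budget) {
      throw new Error(`queueBudget exceeded (${this.budget} bytes)`);
    }
    this.queued.push(chunk);
    this.bytes += chunk.byteLength;
  }

  dequeue(): Uint8Array | undefined {
    const chunk = this.queued.shift();
    if (chunk) this.bytes -= chunk.byteLength;
    return chunk;
  }
}
```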
highWatermark
highWatermark?: number // default 16 MB (16_777_216)
The bufferedAmount threshold above which the channel is considered full. While bufferedAmount ≥ highWatermark, channel.send() returns false and your bytes are held in the JS queue (subject to queueBudget) until the browser drains the transport.
Lower it to cap the OS-side transport buffer — less memory, lower steady-state end-to-end latency, but throughput on bursty writes drops because you spend more time in the pause/drain cycle. Raise it for high-bandwidth fat-pipe links (gigabit LAN, server-to-server) where you want a deeper pipeline and the memory is available.
// Embedded / low-RAM peer: keep the transport buffer small.
const tight = socket.peer(id).createChannel("file", {
  highWatermark: 2 * 1024 * 1024, // 2 MB
});
// Bulk transfer over LAN: fill the pipe.
const bulk = socket.peer(id).createChannel("bulk", {
  highWatermark: 64 * 1024 * 1024, // 64 MB
});
This is library state; it gates send()'s true/false return and the internal _flush loop. It's not passed to RTCDataChannel — the browser doesn't expose a high-water threshold as a constructor option (only the low-water bufferedAmountLowThreshold, which lowWatermark sets).
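A sender that respects the send()-returns-false contract pauses until the channel signals it has drained (the 'drain' event is described under lowWatermark below). A sketch, with the caveat that the callback-style `once("drain", cb)` channel shape here is an assumption about the event API:

```typescript
// Minimal channel surface this sketch relies on (assumed shape).
interface DrainChannel {
  send(data: Uint8Array): boolean;            // false once bufferedAmount >= highWatermark
  once(event: "drain", cb: () => void): void; // fires after draining through lowWatermark
}

// Sketch: push chunks, pausing whenever send() signals the buffer is full.
async function sendAll(channel: DrainChannel, chunks: Uint8Array[]): Promise<void> {
  for (const chunk of chunks) {
    if (!channel.send(chunk)) {
      // Past highWatermark: this chunk is queued (subject to queueBudget);
      // wait for 'drain' before pushing more.
      await new Promise<void>((resolve) => channel.once("drain", resolve));
    }
  }
}
```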
lowWatermark
lowWatermark?: number // default 1 MB (1_048_576)
The bufferedAmount value that re-arms the 'drain' event. Once bufferedAmount rises above highWatermark and then falls back through lowWatermark, the browser fires bufferedamountlow and rtc.io fires 'drain' on your channel — your await channel.once('drain', ...) resolves and you can resume sending.
This value is forwarded to RTCDataChannel.bufferedAmountLowThreshold, so it directly controls when the platform notifies us.
- Higher `lowWatermark` → drain fires earlier, the sender resumes sooner, throughput is smoother, and the transport buffer stays fuller on average (more memory, more steady-state queueing latency).
- Lower `lowWatermark` → drain fires later, the sender resumes after a deeper drain. Burstier, but the buffer empties further between bursts.
// Smoother streaming: resume as soon as we've cleared 4 MB worth of buffer.
const live = socket.peer(id).createChannel("live", {
  highWatermark: 16 * 1024 * 1024,
  lowWatermark: 4 * 1024 * 1024,
});
Constraints: lowWatermark must be lower than highWatermark — if it isn't, the browser will fire bufferedamountlow immediately on every send and drain spam will defeat the throttling. The library doesn't enforce this for you. The default 1 MB / 16 MB ratio is a good starting point; tune the ratio rather than the absolute numbers unless you know what you're targeting.
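Since the library doesn't enforce the constraint, a guard in your own code is cheap. A hypothetical helper (not an rtc.io API) using the documented defaults:

```typescript
// Sketch: validate the watermark pair before creating a channel.
// lowWatermark must sit strictly below highWatermark, or 'drain'
// fires on every send and the throttling is defeated.
function assertWatermarks(opts: { highWatermark?: number; lowWatermark?: number }): void {
  const high = opts.highWatermark ?? 16_777_216; // 16 MB default
  const low = opts.lowWatermark ?? 1_048_576;    // 1 MB default
  if (low >= high) {
    throw new Error(`lowWatermark (${low}) must be below highWatermark (${high})`);
  }
}
```

Note that the check also catches the subtle case where only one side is overridden, e.g. lowering `highWatermark` beneath the default 1 MB `lowWatermark`.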
Defaults at a glance
| Option | Default |
|---|---|
| `iceServers` | Google STUN (stun1 + stun2) |
| `debug` | `false` |
| `watchdog.timeout` | `12_000` ms |
| `watchdog.hintTimeout` | `2_500` ms |
| `watchdog.hintTTL` | `30_000` ms |
| `ordered` | `true` |
| `maxRetransmits` | unlimited (no cap) |
| `maxPacketLifeTime` | unlimited |
| `queueBudget` | 1 MB |
| `highWatermark` | 16 MB |
| `lowWatermark` | 1 MB |
Worked recipes
A reliable broadcast chat:
const chat = socket.createChannel("chat", { ordered: true });
Low-latency telemetry, lossy:
const t = socket.createChannel("position", { ordered: false, maxRetransmits: 0 });
Bounded-latency game state:
const g = socket.createChannel("input", { ordered: false, maxPacketLifeTime: 50 });
Per-peer file transfer with bigger queue:
const f = socket.peer(id).createChannel("file", {
  ordered: true,
  queueBudget: 16 * 1024 * 1024,
});
High-bandwidth LAN bulk transfer (deeper pipeline):
const bulk = socket.peer(id).createChannel("bulk", {
  ordered: true,
  queueBudget: 64 * 1024 * 1024,   // 64 MB — held in JS until DC accepts
  highWatermark: 64 * 1024 * 1024, // 64 MB — pause threshold
  lowWatermark: 16 * 1024 * 1024,  // 16 MB — resume threshold
});
Low-memory / latency-sensitive channel (tight throttling):
const tight = socket.peer(id).createChannel("ctrl-stream", {
  ordered: true,
  highWatermark: 1 * 1024 * 1024, // 1 MB — pause early
  lowWatermark: 256 * 1024,       // 256 KB — drain almost fully before resuming
});
Production socket with TURN and verbose logging during incident:
const socket = io(URL, {
  iceServers: [
    { urls: "stun:stun.l.google.com:19302" },
    { urls: "turn:turn.example.com:3478", username, credential },
  ],
  debug: location.search.includes("debug=1"),
  auth: { token: jwt },
});