RTCIOChannel

import type { RTCIOChannel } from "rtc.io";

const ch = socket.peer(id).createChannel("file", { ordered: true });

RTCIOChannel wraps a single RTCDataChannel between you and one peer. You don't construct it directly — you get one back from socket.peer(id).createChannel(name, options) or as the per-peer entries inside an RTCIOBroadcastChannel.

Sending

emit(name, ...args)

ch.emit(eventName: string, ...args: any[]): void

Sends a JSON envelope { e: name, d: args }. Receivers handle it via ch.on(name, ...). Same idiom as socket.emit, scoped to this channel:

ch.emit("hello", { from: "alice" });
ch.emit("update", 1, 2, 3); // multi-arg

A trailing function argument (the socket.io ack idiom) is dropped with a warning.

send(data)

ch.send(data: ArrayBuffer | string): boolean

Send raw bytes or a raw string. Used for streaming binary blobs (file chunks, codec output) where the JSON envelope shape doesn't fit. Returns true if sent immediately, false if queued or refused (channel full, queue budget exceeded).

const buf = await file.slice(offset, offset + CHUNK).arrayBuffer();
const ok = ch.send(buf);
if (!ok) await new Promise((r) => ch.once("drain", r));

See Backpressure & flow control for the full pattern.

Receiving

on(event, handler) / off(event, handler) / once(event, handler)

ch.on(event: string, handler: (...args: any[]) => void): this
ch.off(event: string, handler: (...args: any[]) => void): this
ch.once(event: string, handler: (...args: any[]) => void): this

Standard EventEmitter-style listener registration. once auto-removes the handler after the first invocation.
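Because once removes its handler after one call, it converts naturally into a promise. A minimal sketch (the onceAsync helper and its structural type are illustrative, not part of the rtc.io API):

```typescript
// Structural slice of the channel API this helper relies on.
interface OnceSource {
  once(event: string, handler: (...args: any[]) => void): unknown;
}

// Resolve with the event's first argument the next time `event` fires.
// Illustrative helper, not a library method.
function onceAsync<T = unknown>(ch: OnceSource, event: string): Promise<T> {
  return new Promise((resolve) => ch.once(event, (arg: T) => resolve(arg)));
}

// Usage sketch: await the channel opening before streaming.
// if (ch.readyState !== "open") await onceAsync(ch, "open");
```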

Three special event names are dispatched by the library itself:

| Event | Args | Fires when |
| ----- | ---- | ---------- |
| `open` | none | Channel opened (SCTP up, ready to send) |
| `close` | none | Channel closed (peer left, you called close, transport died) |
| `error` | `(err)` | Channel error or queue-budget overrun |
| `data` | `(buf: ArrayBuffer \| string)` | Raw payload arrived via the remote peer's `send()` |
| `drain` | none | `bufferedAmount` fell below `lowWatermark` (1 MB by default; configurable via `ChannelOptions`) |

Plus any event name you've emitted: ch.emit("chat", msg) on one side triggers ch.on("chat", (msg) => ...) on the other.

ch.on("open", () => console.log("ready"));
ch.on("close", () => console.log("gone"));
ch.on("error", (e) => console.error("channel error:", e));
ch.on("data", (chunk) => receiver.push(chunk));
ch.on("drain", () => console.log("buffer drained"));
ch.on("msg", (text) => append(text)); // user-defined event

State

readyState: RTCDataChannelState

"connecting" | "open" | "closing" | "closed"

Live property. Mirrors RTCDataChannel.readyState; if the channel hasn't been attached yet (rare), defaults to "connecting".

bufferedAmount: number

Live property. Mirrors RTCDataChannel.bufferedAmount — bytes queued in the browser's transport, not yet sent. Use this if you're implementing your own throttling on top of (or instead of) the built-in watermark/drain pattern:

const PAUSE_AT = 16 * 1024 * 1024; // matches the default highWatermark
if (ch.bufferedAmount > PAUSE_AT) {
  // back off
}

The defaults are highWatermark: 16 MB and lowWatermark: 1 MB; both are overridable per-channel via ChannelOptions.
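If you do roll your own throttling, one option is to poll bufferedAmount until it falls below a limit. A sketch under that assumption (waitBelow is an illustrative name, not library API; real code would usually prefer the built-in 'drain' event):

```typescript
// Poll until the transport queue holds fewer than `limit` bytes.
// Illustrative sketch only — the 'drain' event is the event-driven
// equivalent and avoids the polling interval entirely.
async function waitBelow(
  ch: { bufferedAmount: number },
  limit: number,
  pollMs = 50,
): Promise<void> {
  while (ch.bufferedAmount > limit) {
    await new Promise((r) => setTimeout(r, pollMs));
  }
}

// Usage sketch:
// await waitBelow(ch, 1 * 1024 * 1024); // resume once under ~1 MB queued
```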

Closing

close()

ch.close(): void

Closes the underlying RTCDataChannel and clears any queued payloads. Fires close on both ends.

If the channel is already in closing or closed state, this is a no-op.
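Because close() discards anything still queued, a sender that needs every byte delivered should let the buffer empty first. A hedged sketch (closeWhenFlushed is not a library method; the 50 ms poll interval is an arbitrary assumption):

```typescript
// Wait for the browser's transport queue to empty, then close.
// Illustrative only: close() drops queued payloads, so flushing first
// is the caller's responsibility if delivery matters.
async function closeWhenFlushed(ch: {
  bufferedAmount: number;
  readyState: string;
  close(): void;
}): Promise<void> {
  while (ch.readyState === "open" && ch.bufferedAmount > 0) {
    await new Promise((r) => setTimeout(r, 50));
  }
  ch.close(); // no-op if the channel already closed underneath us
}
```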

Watermarks and queue budget

Three knobs govern how much the channel will buffer before refusing or draining:

| Option | Default | Role |
| ------ | ------- | ---- |
| `highWatermark` | 16 MB | `bufferedAmount` ≥ this → `send()` returns `false` and the library queues your bytes. |
| `lowWatermark` | 1 MB | `bufferedAmount` falls back through this → `'drain'` fires. Forwarded to `RTCDataChannel.bufferedAmountLowThreshold`. |
| `queueBudget` | 1 MB | Hard cap on the JS-side queue (held while the channel is connecting or while above the high watermark). Exceeding it fires `'error'`. |

All three are configurable per-channel:

const ch = socket.peer(id).createChannel("file", {
  queueBudget: 32 * 1024 * 1024,   // 32 MB held in JS until the DC accepts
  highWatermark: 32 * 1024 * 1024, // pause threshold matched to budget
  lowWatermark: 8 * 1024 * 1024,   // drain fires at 8 MB
});

Keep lowWatermark below highWatermark — otherwise the bufferedamountlow event fires immediately on every send and the throttling collapses. See Backpressure & flow control and ChannelOptions for the tuning guide.
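One way to keep the three knobs mutually consistent is to derive them from your chunk size. This is a sketch under assumed ratios (hold ~1024 chunks, drain with ~128 left, budget matching the high watermark) — heuristics of this example, not recommendations from the library:

```typescript
interface WatermarkOptions {
  highWatermark: number;
  lowWatermark: number;
  queueBudget: number;
}

// Derive consistent watermarks from a chunk size. The 1024/128 ratios
// are illustrative assumptions; tune them for your own workload.
function watermarksFor(chunkSize: number): WatermarkOptions {
  const highWatermark = chunkSize * 1024;
  const lowWatermark = chunkSize * 128; // always below highWatermark
  return { highWatermark, lowWatermark, queueBudget: highWatermark };
}

// Usage sketch: watermarksFor(16 * 1024) yields a 16 MB high watermark.
```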

Internal: _attach(dc) / _isAttached()

These are library internals you'll see in stack traces. They wire the wrapper to an underlying RTCDataChannel. You don't call them.

Patterns

Backpressure-aware streaming send

async function streamFile(ch, file) {
  if (ch.readyState !== "open") {
    await new Promise((r) => ch.once("open", r));
  }
  const CHUNK = 16 * 1024;
  for (let offset = 0; offset < file.size; offset += CHUNK) {
    const buf = await file.slice(offset, offset + CHUNK).arrayBuffer();
    if (!ch.send(buf)) {
      await new Promise((r) => ch.once("drain", r));
    }
  }
  ch.emit("eof");
}

Receiver assembling chunks

const chunks: ArrayBuffer[] = [];
ch.on("data", (chunk) => chunks.push(chunk));
ch.on("eof", () => {
  const blob = new Blob(chunks, { type: "application/octet-stream" });
  download(blob);
});

Channel-scoped pub/sub

const ch = socket.createChannel("game-events", { ordered: false });
ch.on("position", (p) => updatePeer(p));
ch.on("score", (s) => bumpScore(s));

ch.emit("position", { x, y });
ch.emit("score", 1);
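Since emit and on are stringly typed, a thin typed wrapper can give compile-time checking of event names and payload shapes. This is entirely illustrative (TypedChannel is not part of rtc.io), using the game events from above:

```typescript
// Structural slice of the channel API the wrapper needs.
interface EmitterLike {
  emit(event: string, ...args: any[]): void;
  on(event: string, handler: (...args: any[]) => void): unknown;
}

// Declare the event map once; TypeScript then checks every call site.
// Illustrative wrapper, not library API.
class TypedChannel<Events extends Record<string, any>> {
  constructor(private ch: EmitterLike) {}

  emit<K extends keyof Events & string>(event: K, payload: Events[K]): void {
    this.ch.emit(event, payload);
  }

  on<K extends keyof Events & string>(event: K, handler: (payload: Events[K]) => void): void {
    this.ch.on(event, handler);
  }
}

// Usage sketch:
// type GameEvents = { position: { x: number; y: number }; score: number };
// const game = new TypedChannel<GameEvents>(ch);
// game.emit("position", { x: 1, y: 2 }); // payload shape checked at compile time
```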

Error handling

error fires for two reasons:

  1. The underlying RTCDataChannel raised an error event — usually a transport problem.
  2. You exceeded the queue budget while the channel was queueing — RTCIOChannel: queue budget exceeded — wait for 'drain' before sending more.

If you see (2), either raise the budget for that channel or back off your sender.

If a send/emit triggers an exception in the underlying transport (rare; usually means the channel was closed mid-call), error fires with the underlying error and send returns false.
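A handler that separates the two causes could look like this sketch. Note that matching on the message text is an assumption derived from the error string quoted above, not a stable API contract:

```typescript
// Classify a channel error into the two documented causes.
// Matching on message text is fragile and assumed from the quoted
// string; a real app might track its own queue state instead.
function isQueueBudgetError(err: unknown): boolean {
  const msg = err instanceof Error ? err.message : String(err);
  return msg.includes("queue budget");
}

// Usage sketch (pauseSender / teardown are hypothetical app functions):
// ch.on("error", (err) => {
//   if (isQueueBudgetError(err)) {
//     pauseSender(); // back off until 'drain'
//   } else {
//     teardown();    // transport-level failure
//   }
// });
```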

What's not on RTCIOChannel

  • No ack callbacks. DataChannels don't have them. If you need confirmation, encode it in your protocol.
  • No "pause"/"resume" methods. The watermark + drain pattern is the API.
  • No re-open after close. If you want a fresh channel, call createChannel(name) again — the library returns the existing instance if the channel is still alive, otherwise opens a new one.
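The "encode it in your protocol" advice for acks can be sketched as a correlation-id request/reply pair. Everything here is hypothetical: the 'req'/'ack' event names, the requestWithAck helper, and the timeout are this example's conventions, not library API:

```typescript
// Structural slice of the channel API this sketch needs.
interface ChannelLike {
  emit(event: string, ...args: any[]): void;
  on(event: string, handler: (...args: any[]) => void): unknown;
}

// Sender side: tag each request with an id and resolve when the matching
// ack comes back. A real implementation would also remove the listener.
function requestWithAck<T>(ch: ChannelLike, payload: T, timeoutMs = 5000): Promise<void> {
  const id = Math.random().toString(36).slice(2);
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error("ack timeout")), timeoutMs);
    ch.on("ack", (ackId: string) => {
      if (ackId === id) {
        clearTimeout(timer);
        resolve();
      }
    });
    ch.emit("req", id, payload);
  });
}

// Receiver side: echo the id back as the ack.
// ch.on("req", (id, payload) => { handle(payload); ch.emit("ack", id); });
```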

Live example

A full file transfer with the send() / 'drain' backpressure contract — the chunk-and-await pattern that lets you ship multi-GB files without OOMing the tab.

File transfer over a per-peer RTCIOChannel
Pick a file in tab #1 to send it to tab #2. Watch the progress bar — backpressure pauses sends when the buffer is full.
src/main.ts
import io, { RTCIOChannel } from 'rtc.io';
import { setupRoom } from './room';
import './styles.css';

const { ROOM, NAME } = setupRoom();

const app = document.querySelector<HTMLDivElement>('#app')!;
app.innerHTML = `
  <div class="card">
    <h1>File transfer · room <code>${ROOM}</code></h1>
    <p><small>Per-peer ordered DataChannel · respects backpressure via <code>send() === false</code> &amp; <code>'drain'</code>.</small></p>
    <input id="file" type="file" />
    <progress id="prog" max="100" value="0" style="width:100%;margin-top:10px;display:none"></progress>
    <p id="status"><small>Click <strong>Open 2nd tab ↗</strong> to bring a peer online.</small></p>
    <div id="received" style="margin-top:14px;display:flex;flex-direction:column;gap:8px"></div>
    <p style="margin-top:10px"><small>You are <code>${NAME}</code>.</small></p>
  </div>`;

const fileInput = document.getElementById('file') as HTMLInputElement;
const prog = document.getElementById('prog') as HTMLProgressElement;
const status = document.getElementById('status')!;
const received = document.getElementById('received')!;

const socket = io('https://server.rtcio.dev', {
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});
socket.server.emit('join-room', { roomId: ROOM, name: NAME });

const channels = new Map<string, RTCIOChannel>();

socket.on('peer-connect', ({ id }) => {
  // Both sides call createChannel('file'); negotiated:true means each end
  // describes the same SCTP stream id in its initial SDP, so the channel is
  // open without a DC-OPEN handshake.
  const ch = socket.peer(id).createChannel('file', { ordered: true });
  channels.set(id, ch);
  attachReceiver(ch);
  status.innerHTML = `<small>Peer ${id.slice(-4)} ready · pick a file to send.</small>`;
});

socket.on('peer-disconnect', ({ id }) => {
  channels.delete(id);
  if (channels.size === 0) status.innerHTML = '<small>No peers connected.</small>';
});

interface FileMeta { tid: string; name: string; size: number; mime: string }

function attachReceiver(channel: RTCIOChannel) {
  let state: { meta: FileMeta; chunks: ArrayBuffer[]; bytes: number } | null = null;

  channel.on('meta', (meta: FileMeta) => {
    state = { meta, chunks: [], bytes: 0 };
  });

  channel.on('data', (chunk: ArrayBuffer) => {
    if (!state) return;
    state.chunks.push(chunk);
    state.bytes += chunk.byteLength;
  });

  channel.on('eof', () => {
    if (!state) return;
    const blob = new Blob(state.chunks, { type: state.meta.mime });
    const url = URL.createObjectURL(blob);
    const row = document.createElement('a');
    row.href = url;
    row.download = state.meta.name;
    row.textContent = `📥 ${state.meta.name} (${(blob.size / 1024).toFixed(1)} KB) — click to download`;
    row.style.cssText = 'color:var(--accent);text-decoration:underline';
    received.appendChild(row);
    state = null;
  });
}

fileInput.addEventListener('change', async () => {
  const file = fileInput.files?.[0];
  if (!file) return;
  if (channels.size === 0) {
    alert('No peers connected — click "Open 2nd tab ↗" first.');
    return;
  }
  prog.style.display = 'block';
  prog.value = 0;

  const tid = crypto.randomUUID();
  const CHUNK = 16 * 1024;

  for (const [, channel] of channels) {
    channel.emit('meta', { tid, name: file.name, size: file.size, mime: file.type });
  }

  let sent = 0;
  for (let off = 0; off < file.size; off += CHUNK) {
    const buf = await file.slice(off, off + CHUNK).arrayBuffer();
    for (const [, channel] of channels) {
      // send() returning false means the chunk was queued. Wait for the
      // 'drain' event before pushing more — this is the entire backpressure
      // contract.
      if (!channel.send(buf)) {
        await new Promise<void>((r) => channel.once('drain', () => r()));
      }
    }
    sent += buf.byteLength;
    prog.value = Math.round((sent / file.size) * 100);
  }

  for (const [, channel] of channels) channel.emit('eof', { tid });
  status.innerHTML = `<small>Sent <strong>${file.name}</strong> to ${channels.size} peer(s).</small>`;
});