# Backplane modes
Single, server, or Redis. Pick based on how many relays you run and how much ops overhead you want to take on.
A backplane is what lets two relay instances forward frames to each other. If you only run one relay, you don't need one.
## When you need a backplane
- You run more than one relay container.
- Your daemons land on different relay instances than the browsers viewing them.
- You want to horizontally scale relay throughput.
A daemon binds to one relay (via DNS / load balancer). A browser binds to one relay (probably a different one). If they land on different relays, the frames between them need a path. That path is the backplane.
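To make the path concrete, here is a minimal in-process sketch of the routing rule. The class names and internals are illustrative assumptions, not the relay's actual implementation: two relay instances share a backplane, a daemon is registered on relay-A, and a frame arriving from a browser on relay-B crosses the backplane to reach it.

```python
from queue import Queue

class Backplane:
    """Stand-in for any backplane mode: fans frames out to the other relays."""
    def __init__(self):
        self.relays = []

    def publish(self, sender, workspace, frame):
        for relay in self.relays:
            if relay is not sender:
                relay.on_bus_frame(workspace, frame)

class Relay:
    def __init__(self, name, backplane):
        self.name = name
        self.backplane = backplane
        self.daemons = {}  # workspace -> inbox of the locally bound daemon
        backplane.relays.append(self)

    def register_daemon(self, workspace):
        self.daemons[workspace] = Queue()

    def handle_browser_frame(self, workspace, frame):
        if workspace in self.daemons:
            self.daemons[workspace].put(frame)          # daemon is local: direct
        else:
            self.backplane.publish(self, workspace, frame)  # daemon elsewhere: bus

    def on_bus_frame(self, workspace, frame):
        if workspace in self.daemons:
            self.daemons[workspace].put(frame)

bus = Backplane()
relay_a, relay_b = Relay("relay-A", bus), Relay("relay-B", bus)
relay_a.register_daemon("ws-123")                       # daemon bound to relay-A
relay_b.handle_browser_frame("ws-123", b"resize 80x24") # browser lands on relay-B
frame = relay_a.daemons["ws-123"].get_nowait()
print(frame)  # b'resize 80x24'
```

With one relay, the `else` branch is never taken — which is why a single instance needs no backplane at all.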
## Mode: single
Default. Everyone goes through one relay instance. Simplest. Use when:
- You have one relay container.
- Your throughput fits one container (typically thousands of concurrent connections, millions of frames/day).
```
docker run -d -e RELAY_BACKPLANE_MODE=single …
```

No external dependency.
## Mode: server
The backplane is the platform's API. Multiple relays push frames through `relay_bus_frames` (a Postgres table on the platform side) and read out the frames addressed to their own daemons/browsers.
```
docker run -d \
  -e RELAY_BACKPLANE_MODE=server \
  -e RELAY_BACKPLANE_URL=https://api.getviewport.com \
  …
```

Pros:
- No Redis to run.
- The platform is already where pairings, workspaces, and audit live. One less moving piece.
- Useful when relays are in different regions / VPCs without easy Redis connectivity.
Cons:
- Latency is higher than Redis pub/sub (rough order: 50–200 ms additional, versus sub-millisecond for Redis).
- Throughput ceiling is whatever the platform's API can handle for `relay_bus_frames`.
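A rough sketch of the traffic shape in this mode: each relay POSTs outbound frames to the platform and polls for frames from other relays. The endpoint path, query parameters, and payload fields below are illustrative assumptions, not the real API — but they show why the latency floor is a poll interval rather than a push.

```python
import json

API = "https://api.getviewport.com"  # RELAY_BACKPLANE_URL

def push_request(workspace, frame, relay_id):
    """Build the POST that would write one row into relay_bus_frames.
    Path and field names are hypothetical."""
    return {
        "method": "POST",
        "url": f"{API}/relay_bus_frames",
        "body": json.dumps({
            "workspace": workspace,
            "sender_relay": relay_id,
            "frame": frame.hex(),  # frames are binary; hex-encode for JSON
        }),
    }

def poll_request(relay_id, cursor):
    """Build the GET that reads frames newer than `cursor`, skipping the
    relay's own writes. Again, a hypothetical shape."""
    return {
        "method": "GET",
        "url": f"{API}/relay_bus_frames?exclude_sender={relay_id}&after={cursor}",
    }

req = push_request("ws-123", b"\x01resize", "relay-a")
print(req["url"])  # https://api.getviewport.com/relay_bus_frames
```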
## Mode: redis
The backplane is a shared Redis. Relays publish/subscribe on Redis channels keyed by workspace.
```
docker run -d \
  -e RELAY_BACKPLANE_MODE=redis \
  -e RELAY_BACKPLANE_URL=redis://redis.your-co.com:6379/0 \
  …
```

Pros:
- Lowest latency. Sub-millisecond fan-out.
- Highest throughput. Redis handles tens of thousands of messages/second easily.
Cons:
- You operate Redis. Single point of failure unless you also run Redis Sentinel or Cluster.
- If Redis is in a different network than the relays, you're back to network ops.
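The publish/subscribe pattern itself is small. Below is an in-process stand-in that shows the workspace-keyed channel scheme without requiring a Redis server; the channel naming is an assumption, not the relay's actual scheme.

```python
# In-process stand-in for Redis pub/sub. With real Redis these map to
# SUBSCRIBE/PUBLISH on the same channel names (e.g. redis-py's
# r.publish(channel, frame)).
subscribers = {}  # channel -> list of callbacks

def channel_for(workspace):
    # Assumed naming convention: one channel per workspace.
    return f"relay:bus:{workspace}"

def subscribe(workspace, callback):
    subscribers.setdefault(channel_for(workspace), []).append(callback)

def publish(workspace, frame):
    for cb in subscribers.get(channel_for(workspace), []):
        cb(frame)

received = []
subscribe("ws-123", received.append)  # relay-A subscribes for its local daemon
publish("ws-123", b"keypress q")      # relay-B publishes the browser's frame
print(received)  # [b'keypress q']
```

Because fan-out happens inside Redis rather than through table writes and polls, this is where the sub-millisecond latency claim comes from.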
## Client redirect
When the daemon for workspace X is on relay-A and the browser opens on relay-B, the browser's frames need to reach the daemon. Two options:
- Bus forwarding (default). Relay-B sends the frame to the backplane, relay-A picks it up and forwards to the daemon. Two hops.
- Client redirect. Relay-B sends an HTTP redirect telling the browser to reconnect to relay-A. One hop after redirect.
Client redirect is faster and reduces relay load, but exposes the relay topology to clients. Enable per-instance:
```
docker run -d -e RELAY_CLIENT_REDIRECT_ENABLED=1 …
```

Pair this with sticky load balancing so daemons consistently land on the same relay and redirects stay rare.
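The decision relay-B makes can be sketched as follows. The owner-lookup table and the redirect URL format are hypothetical; only the three outcomes (accept locally, redirect, bus-forward) come from the text above.

```python
REDIRECT_ENABLED = True  # RELAY_CLIENT_REDIRECT_ENABLED=1
LOCAL_RELAY = "relay-b.example.com"

# Which relay currently holds each workspace's daemon (assumed lookup).
daemon_owner = {"ws-123": "relay-a.example.com"}

def route_browser_connect(workspace):
    """Decide how this relay handles a browser connecting for `workspace`."""
    owner = daemon_owner.get(workspace, LOCAL_RELAY)
    if owner == LOCAL_RELAY:
        return ("accept", None)  # daemon is local: no backplane involved
    if REDIRECT_ENABLED:
        # One hop after redirect: the browser reconnects straight to the
        # owning relay (hypothetical URL shape).
        return ("redirect", f"wss://{owner}/connect/{workspace}")
    # Default: accept here and push every frame over the backplane (two hops).
    return ("bus-forward", owner)

print(route_browser_connect("ws-123"))
```

Note that the redirect branch leaks `owner` — a relay hostname — to the client, which is the topology-exposure trade-off mentioned above.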
## Picking a mode
| Situation | Recommended mode |
|---|---|
| One relay, small team | single |
| One relay, want HA in the future | single now, plan redis |
| Multi-region, no Redis available | server |
| Multi-region, Redis available | redis |
| High frame throughput, want SLA | redis + client redirect |
| Compliance-sensitive, no Redis on infra | server |
You can switch modes by restarting the relay with new env vars. Daemons reconnect automatically.