# Self-host overview
What's self-hostable today, what isn't, and how to reason about the trade-offs.
Viewport has three pieces. Two of them you can run yourself today.
| Component | Self-hostable | Notes |
|---|---|---|
| Daemon (`@viewportai/daemon`) | Yes | Runs on your machine by design. Open source. |
| Relay | Yes | Docker image `ghcr.io/viewportai/relay`. Stateless. |
| Control plane (Laravel API + web) | In beta | Hosted at getviewport.com today. On-prem in private beta. |
## What relay self-host buys you today
- Wire path stays inside your network. The relay sees only encrypted, transient frames; it never persists payloads.
- You control TLS, rotation, IP allow-listing on the relay endpoint.
- Daemon ↔ relay traffic does not transit through Viewport-managed infrastructure.
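IP allow-listing on the relay endpoint can be as simple as a host firewall rule. A minimal sketch, assuming the relay listens on port 8080 and your developer VPN uses 10.0.0.0/8 — both are illustrative assumptions, not Viewport defaults:

```shell
# Sketch only: port 8080 and the 10.0.0.0/8 CIDR are assumptions, not
# Viewport defaults. Accept daemon traffic from the VPN, drop all other
# connections to the relay port.
iptables -A INPUT -p tcp --dport 8080 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP
```

The same shape works with nftables, a security group, or your load balancer's allow-list, since the relay only needs to accept daemon WebSocket connections.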
What still hits Viewport-managed infrastructure when only the relay is self-hosted:
- The control-plane REST API (`api.getviewport.com`) for workspaces, plans, inbox, audit, vault.
- Daemon → platform sync (`/api/runtime/*`).
- Relay → platform JWT validation.
- Audit log writes.
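A practical consequence: if you lock down egress around the relay, the control plane must stay reachable or JWT validation and daemon sync will fail. A quick reachability check — the hostname comes from this page, but treat any specific path as an assumption:

```shell
# api.getviewport.com is the control-plane host named on this page.
# A completed TLS handshake is enough to prove the egress path is open;
# an HTTP error status would still mean the host is reachable.
if curl -sv -o /dev/null https://api.getviewport.com 2>&1 | grep -q 'SSL connection'; then
  echo "control plane reachable"
else
  echo "blocked: relay JWT validation and daemon sync will fail"
fi
```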
For most teams, relay self-host meets the bar. For compliance contexts that mandate "no production data on third-party infra," see the on-prem path.
## When to self-host
Reasons that work:
- Network policy: agents run on machines in a VPC that can't reach public WS endpoints.
- Throughput: heavy workflow runs where you want relay ops in your control.
- Cost: at high scale, running your own relay can be cheaper than the hosted bill.
- Latency: run the relay close to your developers (especially if they cluster in one or two regions).
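To put numbers on the latency argument, time connection setup to each candidate relay from a developer machine. A sketch — both hostnames are placeholders:

```shell
# Hostnames are placeholders for a self-hosted and a hosted relay endpoint.
# time_appconnect covers TCP + TLS setup, a fair proxy for the round trips
# a WebSocket upgrade will pay.
for relay in relay.internal.example hosted-relay.example; do
  curl -so /dev/null \
    -w "$relay connect=%{time_connect}s tls=%{time_appconnect}s\n" \
    "https://$relay/"
done
```

If the self-hosted number is not meaningfully lower for most of your developers, latency alone is not a reason to self-host.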
Reasons that don't:
- Distrust of cloud per se. The relay alone doesn't get you full-stack on-prem.
- Avoiding daemon updates. The daemon updates via npm and is independent of the relay.
## Architecture for self-hosters
    your developers' machines      your infrastructure          Viewport-managed
    ─────────────────────────      ────────────────────         ────────────────
    daemon (vpd) ── wss ──► your relay container ── HTTPS ──► api.getviewport.com
                                                                      │
                                                                      ▼
                                                              audit, plans, inbox,
                                                              workflows, members,
                                                              ciphertext vault

The relay runs on your infrastructure. Daemons connect to it. The relay validates JWTs against the platform and forwards frames in memory. It is operationally stateless: restart it anytime; there is no state at rest.
## What you're operating
When you self-host the relay:
- One or more Docker containers running `ghcr.io/viewportai/relay`.
- A way to terminate TLS (Caddy, Nginx, ELB, whatever).
- DNS pointed at your container(s).
- Optional: Redis or a platform-backed backplane if you run multiple relays.
- Optional: Prometheus scraping `/metrics` if you want dashboards.
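For a single node, the list above collapses into a couple of commands. A sketch, not the official quickstart: the image name comes from this page, but the listen port and the Caddy pairing are assumptions — check the deploy guide for the real flags.

```shell
# Image name is from this page; the listen port (8080) is an illustrative
# assumption -- the deploy guide documents the actual configuration.
docker run -d --name viewport-relay --restart unless-stopped \
  -p 127.0.0.1:8080:8080 \
  ghcr.io/viewportai/relay

# Terminate TLS in front of it (Caddy shown; Nginx or an ELB work the same
# way). relay.example.com is a placeholder for your DNS name.
docker run -d --name relay-tls --network host caddy:2 \
  caddy reverse-proxy --from relay.example.com --to 127.0.0.1:8080
```

Because the relay is stateless, there is nothing to back up: replacing the container is the whole upgrade story.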
## Where to go next
- Deploy the relay. Single-container quickstart.
- Backplane modes. When you outgrow one container.
- Monitor and operate. `/state`, `/metrics`, alerts.
- Security posture. What the relay can and cannot see.
- On-prem roadmap. Full-stack timeline.