Sanctum is designed to run across multiple physical locations. A primary hub at your main haus coordinates with satellite nodes at secondary properties, mobile nodes on laptops, and (in the future) sensor nodes for lightweight IoT devices. All nodes share a single instance.yaml configuration and communicate over Tailscale.

What you’re looking at is a multi-site distributed system. The kind of thing a mid-sized company might run across three data centers. Except the data centers are hauses, the ops team is one person, and the primary node is next to a coffee maker.
Hub
The full-stack primary node. Runs the Mac Mini host with the Ubuntu VM, all AI agents, Home Assistant, inference servers, and the complete service catalog. There is exactly one hub per Sanctum instance.
Satellite
A lighter deployment at a secondary haus. Runs a subset of services (typically a gateway, Home Assistant, and a small local model). Syncs configuration and state with the hub over Tailscale.
Mobile
A MacBook Pro or similar portable device. Connects to the hub remotely via Tailscale for agent access, SSH, and API calls. Runs a small set of persistent guardrail daemons (see “Persistent Services Across Hosts” below) but no user-facing primary services.
Sensor
A future node type for dedicated IoT or monitoring hardware. Planned for low-power devices that report data to the hub without running the full Sanctum stack.
Each node knows who it is through a single-line file at ~/.sanctum/.node_id:
```bash
# On the hub:
cat ~/.sanctum/.node_id
# hub

# On the satellite:
cat ~/.sanctum/.node_id
# satellite
```

One file. One word. The machine’s entire sense of self lives in a text file smaller than a tweet. And yet if you delete it, everything stops knowing where it is. Identity is fragile — even for computers.
The identity string must match a key in the nodes section of instance.yaml. Scripts and services use this to determine which configuration block applies to the current machine.
```bash
source ~/.sanctum/lib/config.sh

NODE=$(sanctum_whoami)                       # "hub"
TYPE=$(sanctum_node_get "$NODE" type)        # "hub"
IP=$(sanctum_node_get "$NODE" tailscale_ip)  # "100.0.0.20"
```

Every node is declared under the `nodes` key with its network addresses, SSH user, node type, and the list of services it runs:
```yaml
nodes:
  hub:
    type: hub
    host: 192.168.1.10
    tailscale_ip: 100.0.0.20
    ssh_user: operator
    services:
      - gateway
      - home_assistant
      - dashboard
      - voice_agent
      - lm_studio
      - council_mlx
      - firewalla_bridge
      - cloudflare_tunnel
      - watchdog

  satellite:
    type: satellite
    host: null           # Set during on-site install
    tailscale_ip: 100.0.0.30
    ssh_user: operator
    services:
      - gateway
      - home_assistant

  macbook:
    type: mobile
    host: null           # DHCP, varies by network
    tailscale_ip: 100.0.0.40
    ssh_user: operator
    services: []         # No persistent services
```

| Field | Required | Description |
|---|---|---|
| `type` | Yes | One of `hub`, `satellite`, `mobile`, `sensor` |
| `host` | No | LAN IP address. `null` if the node is off the haus network or gets its address via DHCP. |
| `tailscale_ip` | Yes | Stable Tailscale IP for cross-network access |
| `ssh_user` | Yes | Username for SSH connections to this node |
| `services` | Yes | List of service keys from `services.*` that this node runs |
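For scripts that don’t source the config library, resolving the current node’s block takes only a few lines. A minimal sketch, assuming yq v4 is installed and that instance.yaml lives at ~/.sanctum/instance.yaml (the exact path is an assumption):

```bash
#!/usr/bin/env bash
# Sketch: resolve this machine's config block from instance.yaml.
# Assumes yq v4 and the file locations described above.
set -euo pipefail

NODE_ID=$(cat ~/.sanctum/.node_id)

# The identity string must match a key under `nodes:`. Fail loudly if it doesn't.
TYPE=$(yq ".nodes.${NODE_ID}.type" ~/.sanctum/instance.yaml)
if [ "$TYPE" = "null" ]; then
  echo "Unknown node id: ${NODE_ID}" >&2
  exit 1
fi

IP=$(yq ".nodes.${NODE_ID}.tailscale_ip" ~/.sanctum/instance.yaml)
echo "I am ${NODE_ID} (${TYPE}) at ${IP}"
```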
The hub is the authoritative node. It runs every service, hosts the VM with the agent cluster, and is the source that satellites sync from. In organizational terms, this is the head office, the server room, and the IT department — all running on a machine the size of a hardcover book.
These services are pinned to the hub in instance.yaml and are not deployed to satellites:
| Service | Reason |
|---|---|
| Council MLX | Requires Apple Silicon with sufficient memory |
| LM Studio | Large model inference, hub-only hardware |
| Firewalla Bridge | Direct LAN access to the primary router |
| Cloudflare Tunnel | Single ingress point for the instance |
| Orbi Bridge | Direct LAN access to the access point |
| Sonos Bridge | Native SoCo control of LAN Sonos speakers |
| Voice Agent | Tied to local Sonos Bridge and Sanctum TTS |
The Mac Mini hub runs two virtual machines: the Ubuntu VM (QEMU) and the Docker VM (Colima). Both need network access, but only one needs a bridge to the 10.10.10.0/24 subnet.
QEMU (via socket_vmnet) creates a macOS vmnet bridge (bridge100) for the Ubuntu VM. This bridge carries all agent traffic, SSH tunnels, and inter-node communication on the 10.10.10.0/24 subnet. The Mac host is 10.10.10.1, the VM is 10.10.10.10.
Colima (Docker) uses user-mode networking with port forwarding. It does not need vmnet. Its network.address setting is false in ~/.colima/default/colima.yaml. This is deliberate: if Colima creates a vmnet bridge, it races QEMU for bridge100 and the VM ends up on bridge102 — a different bridge with a different host IP (10.10.10.2), which breaks every socat proxy, SSH tunnel, and service health check that expects 10.10.10.1.
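The relevant stanza, as the text above describes it (a minimal excerpt; the surrounding keys in colima.yaml are omitted):

```yaml
# ~/.colima/default/colima.yaml (excerpt)
network:
  address: false   # never request a vmnet address; user-mode networking only
```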
This was learned the hard way on March 29, 2026, when a dual-bridge conflict caused half the test suite to fail. The fix: one vmnet user (QEMU), one bridge (bridge100), one subnet.
The Mac Mini starts every service — including the VM — without a GUI login. No auto-login, no Touch ID removal, no compromises. Two LaunchDaemons handle everything:
com.sanctum.vmnet (root) creates the 10.10.10.x network:
```
/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet \
  --vmnet-mode host --vmnet-gateway 10.10.10.1 \
  /opt/homebrew/var/run/socket_vmnet_sanctum
```
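As a sketch of how that command might be wrapped in a LaunchDaemon: the Label and ProgramArguments follow from the text above, while RunAtLoad, KeepAlive, and the plist path are assumptions.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- /Library/LaunchDaemons/com.sanctum.vmnet.plist (assumed path) -->
  <key>Label</key><string>com.sanctum.vmnet</string>
  <key>ProgramArguments</key>
  <array>
    <string>/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet</string>
    <string>--vmnet-mode</string><string>host</string>
    <string>--vmnet-gateway</string><string>10.10.10.1</string>
    <string>/opt/homebrew/var/run/socket_vmnet_sanctum</string>
  </array>
  <key>RunAtLoad</key><true/>
  <key>KeepAlive</key><true/>   <!-- assumption: restart the bridge if it dies -->
</dict>
</plist>
```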
com.sanctum.bootstrap (operator) starts all services in four phases. (Some services have moved to system/ since W6/W7/W9 — bootstrap no longer manages their lifecycle.) The VM phase starts the Ubuntu VM via socket_vmnet_client, which passes the vmnet socket as fd=3 (no Apple Dev ID signing required, no GUI needed), confirms the bridge was detected and the host IP set to 10.10.10.1, then waits for VM SSH.

```bash
# Install or update the bootstrap (one command)
bash ~/.sanctum/boot/install-bootstrap.sh

# Run bootstrap tests (30 integration tests)
bash ~/.sanctum/boot/test_bootstrap.sh
```

macOS Sequoia+ does not auto-load SSH keys into the agent at boot. The apple-post-boot.sh script runs ssh-add --apple-load-keychain to load all Keychain-stored SSH passphrases into the agent. Without this, non-interactive LaunchAgents (SSH tunnels, skill sync, VM health checks) cannot authenticate to the VM.
The SSH config for VM hosts uses IdentityAgent SSH_AUTH_SOCK to ensure they use the system agent, not the 1Password SSH agent (which is the Host * default).
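A minimal sketch of that stanza; the Host pattern and User are assumptions drawn from the SSH examples elsewhere in this section, and only the IdentityAgent line is documented:

```
# ~/.ssh/config (sketch)
Host 10.10.10.10
    User ubuntu                   # assumption, from the SSH examples above
    IdentityAgent SSH_AUTH_SOCK   # system agent, not the 1Password agent
```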
A satellite is a smaller deployment at a secondary location. It runs a gateway with a lightweight local model and its own Home Assistant instance for location-specific devices.
Think of it as a field office. It can operate independently, make local decisions, and keep the lights on — but the real horsepower stays at headquarters. The satellite doesn’t need five AI agents. It needs to control the heat and not die when the internet goes out.
During the on-site install, set the satellite’s .node_id file to its name (e.g., satellite) and fill in its host field in instance.yaml on the hub. Satellites pull updates from the hub over Tailscale.
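A sketch of what one of those pulls might look like from the satellite’s side; the rsync flags and paths are assumptions, and only the direction (hub to satellite) and transport (Tailscale) are documented:

```bash
# Pull the skills repo from the hub over the tailnet (hypothetical paths/flags).
rsync -az --delete \
  operator@100.0.0.20:~/.sanctum/skills/ \
  ~/.sanctum/skills/
```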
The full set of sync paths:

```
Hub (hub)                              Satellite (satellite)
 |                                      |
 +-- instance.yaml ---- Tailscale ----> instance.yaml (subset)
 +-- skills repo ------ Tailscale ----> skills repo (rsync)
 +-- agent config ----- Tailscale ----> agent config
```

Mobile nodes are laptops that connect to the Sanctum instance remotely. They run a small set of persistent guardrail daemons (see below) but no user-facing primary services. Otherwise they SSH into the hub, query agents, and access dashboards over Tailscale.
| Action | Command / URL |
|---|---|
| SSH to hub | ssh operator@100.0.0.20 |
| SSH to VM | ssh -J operator@100.0.0.20 ubuntu@10.10.10.10 |
| Dashboard | http://100.0.0.20:1111 |
| Home Assistant | https://ha.example.net (via Cloudflare) |
| Agent query | Via gateway API at 100.0.0.20:1977 |
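For example, an agent query from the mobile node might look like the call below. The gateway is only documented as living at 100.0.0.20:1977, so the endpoint path and payload shape here are purely hypothetical:

```bash
# Hypothetical gateway call; only the host and port come from the table above.
curl -s http://100.0.0.20:1977/agent/query \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "status report"}'
```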
Three Sanctum daemons run on both the hub and the mobile node. They are deliberately symmetric: each host enforces its own RAM ceiling, sheds its own offenders, and exposes its own Claude Max subscription. There is no central authority — that was the point of the 2026-04-24 capacity doctrine. Each host is responsible for its own air supply.
| Daemon | Hub (Mac Mini) | Mobile (MacBook Pro) | Role |
|---|---|---|---|
| `com.sanctum.admit` | :2189 | :2189 | RAM-pool admission control. Per-host doctrine. |
| `com.sanctum.pressure-valve` | N/A (no listener) | N/A (no listener) | Sheds and freezes offenders before the kernel does. Per-host. |
| `com.sanctum.claude-max-proxy` | :3456 | :3456 | OpenAI-compatible HTTP wrapper around the local claude CLI. Each host has its own Claude Max OAuth session; no cross-machine routing required. |
The Claude proxy was unified on 2026-04-27 — before that the hub ran a per-request CLI-spawn proxy on :2001 (com.sanctum.claude-cli-proxy), while the mobile ran the persistent claude-max-api-proxy npm package. Both hosts now run the same npm package via a tiny wrapper at /Users/neo/.sanctum/bin/claude-max-api-tailscale.js. Smart-router cloud-backend fallback is symmetric: either host can serve the other’s escalations if the primary is down.
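Since the proxy is OpenAI-compatible, a probe from either host can use the standard chat-completions shape. Both the endpoint path (which follows the OpenAI convention) and the model name below are assumptions, not documented values:

```bash
# OpenAI-style request against the local Claude proxy on :3456.
curl -s http://localhost:3456/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "claude", "messages": [{"role": "user", "content": "ping"}]}'
```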
Per-host symmetry by design: each machine answers for its own air supply (capacity doctrine, 2026-04-24). User-facing and stateful daemons (watchdog, chitti, livekit-server, lmstudio-bridge, home-assistant, outline) remain hub-only — the mobile is a dev and validation surface and doesn’t carry production state.
Both the shell and TypeScript libraries provide functions for working with the node topology:
```bash
source ~/.sanctum/lib/config.sh

# Who am I?
sanctum_whoami                           # "hub"

# Get a field from any node
sanctum_node_get satellite tailscale_ip  # "100.0.0.30"
sanctum_node_get macbook ssh_user        # "operator"
```

```typescript
import { whoami, nodeGet, getNodesByType } from './lib/config';

const me = whoami();                                       // "hub"
const satellites = getNodesByType('satellite');            // ["satellite"]
const satelliteIp = nodeGet('satellite', 'tailscale_ip');  // "100.0.0.30"
```

```
              Tailscale Mesh (tail7c6d11.ts.net)
   _______________________________________________
  /                      |                        \
Hub: hub           Satellite: satellite     Mobile: macbook
100.0.0.20         100.0.0.30               100.0.0.40
  |                   |                        |
[Mac Mini M4 Pro]  [Mac Mini M1]         [MacBook Pro M4 Max]
 +-- Ubuntu VM      +-- Gateway           (no services)
 +-- 6 AI Agents    +-- Home Assistant
 +-- Full service   +-- Local LLM (3B)
 |   catalog
 +-- Home Assistant
 +-- Inference servers
```

Three machines. Two hauses. One tailnet. Zero regrets. Well. Few regrets.