
# LaunchAgents Audit (2026-04-23)

*Cover image: a launchpad-style control panel with 58 labeled status indicators, some teal (running clean), some amber (scheduled), a few flickering red (error).*

58 com.sanctum.* agents run across the two machines (the Mini and the MacBook Pro). That’s a lot of plists for two boxes. This page is the dated inventory: who runs what, what’s healthy, what’s flapping, and what should probably be retired.

Legend: RUN running clean · RUN-flap running, but respawned after a recent non-zero exit · SCHED scheduled (no PID, last exit 0) · FAIL error state · OFF disabled.
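
These states are read straight off `launchctl list`: the first column is the PID (`-` when idle), the second is the last exit status, the third is the label. OFF agents don’t appear at all, since their plists are unloaded. A minimal sketch of the mapping:

```shell
# Map `launchctl list` columns (PID, last exit status) to the legend above.
classify() {
  local pid="$1" last_exit="$2"
  if [ "$pid" != "-" ]; then
    echo RUN          # live PID (flapping if launchd keeps respawning it)
  elif [ "$last_exit" = "0" ]; then
    echo SCHED        # idle between runs, last exit clean
  else
    echo FAIL         # idle with a non-zero last exit
  fi
}

# Only meaningful on a box with launchd; guarded so the sketch is portable.
if command -v launchctl >/dev/null; then
  launchctl list | awk '/com\.sanctum\./ {print $1, $2, $3}' |
    while read -r pid last_exit label; do
      printf '%-45s %s\n' "$label" "$(classify "$pid" "$last_exit")"
    done
fi
```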


| Agent | State | Purpose |
|---|---|---|
| com.sanctum.mlx | RUN | sanctum-mlx serving Qwen3.6-35B-A3B on :1337 mTLS. The brain. |
| com.sanctum.lmstudio-bridge | RUN | Keeps LM Studio’s :1234 alive; Qui-Gon and Ahsoka route through it for Coder-14B. |


| Agent | State | Interval | Purpose |
|---|---|---|---|
| com.sanctum.pressure-valve | RUN | 5 s | Memory-pressure watchdog; armed 2026-04-22, now SIGSTOPs on RED |
| com.sanctum.council-guardian | SCHED | 30 s | Fast /v1/models probe; restarts sanctum-mlx if dead |
| com.sanctum.council-canary | SCHED | 10 min | Slow chat probe (“2+2”) for correctness regression |
| com.sanctum.council-drift | FAIL exit 1 | 1 h | SHA-check deployed artifacts vs repo. Last error: known — depends on parallel session’s uncommitted work |
| com.sanctum.council-parity-smoke | SCHED | nightly 03:00 | 10-prompt token-level parity test vs Python mlx_lm |
| com.sanctum.council-integrity | SCHED | hourly | Weight manifest re-verify |
| com.sanctum.drift-sentinel | FAIL exit 1 | 5 min | Windu’s Firewalla-vs-ARP drift detector. Exit 1 likely a stale threshold — investigate |

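
Each interval probe is just a launchd plist with a `StartInterval`. A sketch of the guardian-style shape, with an illustrative inline probe body (the real script lives elsewhere, and the real probe presents the mTLS client cert rather than skipping verification):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.sanctum.council-guardian</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/sh</string>
    <string>-c</string>
    <!-- illustrative body: probe /v1/models, kick the server if it is dead -->
    <string>curl -fsSk https://127.0.0.1:1337/v1/models >/dev/null ||
      launchctl kickstart -k gui/$(id -u)/com.sanctum.mlx</string>
  </array>
  <key>StartInterval</key>
  <integer>30</integer>
  <key>StandardErrorPath</key>
  <string>/tmp/council-guardian.err</string>
</dict>
</plist>
```

`launchctl kickstart -k` kills and restarts the target service in one step, which is why the guardian never needs a PID file.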

| Agent | State | Purpose |
|---|---|---|
| com.sanctum.tunnel | RUN | Primary SSH tunnel for chalet |
| com.sanctum.ha-tunnel | RUN | Home Assistant tunnel |
| com.sanctum.health-tunnel | RUN-flap exit 255 | Health Center tunnel; respawned after SSH timeout |
| com.sanctum.graphiti-tunnel | RUN-flap exit 255 | Graphiti service tunnel; same SSH-timeout pattern |
| com.sanctum.network-control-tunnel | RUN | Firewalla network-control API tunnel |
| com.sanctum.bridge | RUN | Bridge100 sanctum-triage proxy |
| com.sanctum.firewalla | RUN | Firewalla Purple API bridge |
| com.sanctum.orbi-bridge | RUN | Orbi router API bridge |
| com.sanctum.presence | RUN | Presence detector (who’s home) |
| com.sanctum.ha-gateway | RUN | HA REST gateway |

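
Exit 255 is ssh’s catch-all error status, so RUN-flap exit 255 is the expected signature of a `KeepAlive` tunnel whose connection timed out: ssh dies, launchd respawns it. A sketch of the tunnel shape (host alias and forwarded ports are hypothetical):

```xml
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.sanctum.health-tunnel</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/ssh</string>
    <string>-N</string>
    <string>-o</string><string>ServerAliveInterval=30</string>
    <string>-o</string><string>ServerAliveCountMax=3</string>
    <string>-o</string><string>ExitOnForwardFailure=yes</string>
    <string>-L</string><string>18080:127.0.0.1:8080</string>
    <string>chalet</string>
  </array>
  <key>KeepAlive</key>
  <true/>
  <key>ThrottleInterval</key>
  <integer>10</integer>
</dict>
</plist>
```

`ServerAlive*` makes a dead link fail fast instead of hanging, `ExitOnForwardFailure` turns a broken forward into an exit launchd can see, and `ThrottleInterval` keeps the respawn loop from running hot.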

| Agent | State | Purpose |
|---|---|---|
| com.sanctum.livekit-server | RUN | LiveKit voice call server |
| com.sanctum.voice-agent | FAIL exit 1 | yoda-voice-agent.py; exit 1 recurring — investigate |


| Agent | State | Purpose |
|---|---|---|
| com.sanctum.tommy | RUN | Tommy briefing agent (VM-side) |
| com.sanctum.yoda-token-minter | RUN | Yoda auth token rotation |
| com.sanctum.claude-max-proxy | RUN | Claude Max HTTP proxy (npm claude-max-api-proxy) on :3456 — symmetric with the MacBook Pro. Replaced the per-request com.sanctum.claude-cli-proxy CLI-spawn proxy on 2026-04-27. |
| com.sanctum.signal-cli | RUN | Signal CLI message daemon |
| com.sanctum.icloud-filer | RUN | iCloud file organizer |


| Agent | State | Schedule | Purpose |
|---|---|---|---|
| com.sanctum.morning-briefing | SCHED | daily | Morning briefing generation |
| com.sanctum.perf-review | SCHED | weekly | Performance review snapshot |
| com.sanctum.tech-lookout | SCHED | daily | Tech news scan |
| com.sanctum.model-scout | SCHED | weekly | New model release scan |
| com.sanctum.fire-drill | SCHED | monthly | Recovery drill |
| com.sanctum.rotate-secrets | SCHED | weekly | Secret rotation |
| com.sanctum.secrets-audit | FAIL exit 1 | daily | Secret hygiene audit — investigate exit 1 |
| com.sanctum.token-refresh | SCHED | 1 h | OAuth token refresh |
| com.sanctum.version-check | SCHED | daily | SW version drift check |
| com.sanctum.signal-health | SCHED | hourly | Signal CLI health check |
| com.sanctum.agent-markdown-sync | SCHED | 5 min | Sync agent prompts ↔ repo |

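
The daily/weekly/monthly rows use launchd’s `StartCalendarInterval` rather than `StartInterval`: you specify the calendar fields you care about and launchd wildcards the rest. A fragment sketch (the 06:30 time is illustrative, not the real schedule):

```xml
<!-- daily at 06:30 -->
<key>StartCalendarInterval</key>
<dict>
  <key>Hour</key><integer>6</integer>
  <key>Minute</key><integer>30</integer>
</dict>

<!-- weekly: add Weekday (0 and 7 both mean Sunday) -->
<key>StartCalendarInterval</key>
<dict>
  <key>Weekday</key><integer>1</integer>
  <key>Hour</key><integer>6</integer>
  <key>Minute</key><integer>30</integer>
</dict>
```

Unlike cron, launchd fires a missed calendar job at the next wake if the machine was asleep at the scheduled time, so daily jobs still run on a box that naps.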

| Agent | State | Purpose |
|---|---|---|
| com.sanctum.watchdog | RUN-flap exit 1 | Top-level process supervisor — recurring exit 1, investigate |
| com.sanctum.ha-self-healer | SCHED | Auto-remediate HA flaps |
| com.sanctum.openclaw.colima | SCHED | Colima (Docker VM) management |
| com.sanctum.openclaw.ha-healer | SCHED | OpenClaw HA healer |
| com.sanctum.openclaw.docker-startup | FAIL exit 1 | Post-Docker-ready startup hook — exit 1 recurring |
| com.sanctum.vm-autostart | SCHED | VM auto-start on boot |
| com.sanctum.vm-push | SCHED | Push artifacts to VM |
| com.sanctum.post-boot | FAIL exit 4 | Post-boot verification script — exit 4 recurring |
| com.sanctum.rust-readiness-check | FAIL exit 2 | Pre-flight Rust toolchain check — exit 2 recurring |
| com.sanctum.memory-consolidate | FAIL exit 1 | Memory-vault consolidation — exit 1 recurring |
| com.sanctum.force-flow | RUN | Security alert router (bell, notify, escalate) |


| Agent | State | Purpose |
|---|---|---|
| com.sanctum.dashboard | RUN-flap exit -15 | Holocron dashboard server; respawned after recent SIGTERM |
| com.sanctum.rewind-dashboard | RUN | Rewind dashboard (activity timeline) |
| com.sanctum.health-center | RUN-flap exit 143 | Health Center API; respawned after SIGTERM |
| com.sanctum.proxy | FAIL exit 101 | Sanctum proxy launcher — exit 101 recurring, investigate |


| Agent | Disabled | Reason |
|---|---|---|
| com.sanctum.server-mlx.plist.disabled-20260422 | 2026-04-22 | Python mlx_lm fallback retired post-mTLS migration. Delete on 2026-05-05. |
| com.sanctum.server.plist.disabled | earlier | Rust sanctum-server router not yet promoted. Deliberately deferred. |
| com.sanctum.living-force.plist.disabled | unknown | Orphan — investigate; likely safe to delete |


| Agent | State | Purpose |
|---|---|---|
| com.sanctum.shadow-mlx | RUN | sanctum-mlx shadow for HA failover (:8902 plain, :8903 mTLS) |
| com.sanctum.council-canary-offbox | SCHED (10 min) | Off-box chat probe to Mini via Tailscale (catches a Mini panic before Mini can log it) |
| com.sanctum.council-drift-offbox | FAIL exit 1 | Off-box drift check via deploy-sanctum-mlx.sh verify. Known issue: parallel session’s uncommitted work drifts the repo |
| com.sanctum.autoresearch | SCHED | Overnight LLM research runner |
| com.sanctum.backup | SCHED | MBP-side backup |
| com.sanctum.secret-rotation-scan | SCHED | MBP-side secret rotation monitor |
| com.sanctum.agent-markdown-sync | SCHED | Same as Mini; cross-machine sync |


| Candidate | Status | Action |
|---|---|---|
| com.sanctum.server-mlx.plist.disabled-20260422 (Mini) | Disabled since 2026-04-22 | Delete 2026-05-05 (rollback window closes) — already on calendar |
| com.sanctum.living-force.plist.disabled (Mini) | Orphan, unknown last-useful date | Investigate + likely delete |
| com.sanctum.server-mlx.plist.bak-20260420 (Mini) | Old .bak file | Safe to delete — precedes .disabled-20260422 |
| sanctum-mlx.old-pre-mtls binary (Mini) | Rollback binary from mTLS migration | Delete 2026-05-05 — already on calendar |
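
Retirement here is a two-step: unload the agent from launchd, then rename the plist with a dated `.disabled-` suffix so the rollback window is visible in the filename (the convention the disabled table above already uses). A sketch, with the label as a parameter:

```shell
# Retire a LaunchAgent: unload it, then date-stamp the plist as disabled.
retire_agent() {
  local label="$1"
  local dir="${2:-$HOME/Library/LaunchAgents}"
  local plist="$dir/$label.plist"
  [ -f "$plist" ] || { echo "no plist for $label" >&2; return 1; }
  # Unload from the per-user domain; ignore errors if it was not loaded.
  launchctl bootout "gui/$(id -u)/$label" 2>/dev/null || true
  # e.g. com.sanctum.foo.plist -> com.sanctum.foo.plist.disabled-20260423
  mv "$plist" "$plist.disabled-$(date +%Y%m%d)"
}
```

The dated suffix is what makes the "delete on 2026-05-05" calendar entries possible: the filename itself records when the rollback window opened.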

Ordered by noise level (recurring non-zero exits deserve attention first):

| Agent | Exit | Priority | Likely cause |
|---|---|---|---|
| com.sanctum.proxy | 101 | High — serves all proxy requests | Script error in proxy-launcher.sh; check ~/.openclaw/logs/sanctum-proxy.err |
| com.sanctum.post-boot | 4 | Medium — runs once at boot | Hook script expecting something that isn’t there |
| com.sanctum.rust-readiness-check | 2 | Medium — pre-flight gate | Likely cargo/toolchain drift or missing binary |
| com.sanctum.memory-consolidate | 1 | Medium — memory-vault integrity | Check log; possibly schema drift |
| com.sanctum.voice-agent | 1 | Medium — yoda-voice | TTS dependency or model missing |
| com.sanctum.secrets-audit | 1 | Medium — security hygiene | Possibly a moved file or revoked scope |
| com.sanctum.openclaw.docker-startup | 1 | Low — race with Docker readiness | Probably benign if Docker eventually loads |
| com.sanctum.drift-sentinel | 1 | Low — Windu’s drift detector | Possibly the same stale-threshold issue as council-drift |
| com.sanctum.watchdog | 1 | Low — supervisor; running anyway | Investigate, but non-blocking |
| com.sanctum.council-drift (Mini) + council-drift-offbox (MBP) | 1 | Known | Parallel session’s uncommitted work shows up as drift — resolves when they commit |
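
For every FAIL row, the first move is the same: confirm the last exit with `launchctl list <label>` (which dumps a dict including `LastExitStatus`), then tail the agent’s stderr log. A sketch; the `sanctum-<name>.err` log naming is generalized from the proxy row above and may not hold for every agent:

```shell
# Triage one failing agent: last recorded exit status, then its stderr log.
triage() {
  local label="$1" logdir="${2:-$HOME/.openclaw/logs}"
  # `launchctl list <label>` prints a dict including "LastExitStatus".
  launchctl list "$label" 2>/dev/null | grep LastExitStatus || true
  # Assumed naming convention: com.sanctum.proxy -> sanctum-proxy.err
  local log="$logdir/sanctum-${label#com.sanctum.}.err"
  [ -f "$log" ] && tail -n 20 "$log"
}
```

Usage would be `triage com.sanctum.proxy`, worked top to bottom through the table.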

Is 58 agents too many? Probably yes. Not because any individual agent is wrong, but because:

  • No single-pane dashboard for their health — you find out an agent is flapping by tailing logs.
  • No standard naming convention for probes vs bridges vs scheduled jobs vs apps — everything is com.sanctum.<noun>.
  • Retirement is manual. Agents linger long past their usefulness unless someone notices.

A future consolidation pass could:

  1. Fold probes into sanctum-server as child tasks once it’s promoted (guardian, canary, drift, integrity, parity-smoke — all probes of the same thing; they don’t need 6 separate plists).
  2. Adopt naming prefixes: com.sanctum.probe.*, com.sanctum.bridge.*, com.sanctum.app.*, com.sanctum.sched.*. That makes `launchctl list | grep probe` trivial.
  3. Wire an SLO dashboard (Holocron panel, Prometheus-style /metrics endpoint) that shows each agent’s last N exit codes and flapping-rate.
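
As a stopgap before a real dashboard, the flapping-rate half of item 3 needs nothing more than periodic snapshots of `launchctl list`. A sketch (log format and poll cadence are assumptions, not existing tooling):

```shell
# Append one line per agent per poll: timestamp, label, pid, last exit.
# (launchctl list columns are PID, Status, Label.)
poll_agents() {
  launchctl list | awk -v ts="$(date +%s)" \
    '/com\.sanctum\./ {print ts, $3, $1, $2}'
}

# Flap rate per agent over the collected samples: the fraction of polls
# where the last recorded exit was non-zero.
flap_rate() {
  awk '{n[$2]++; if ($4 != 0) bad[$2]++}
       END {for (l in n) printf "%s %.2f\n", l, bad[l]/n[l]}' "$1"
}
```

Run `poll_agents >> agents.log` from its own (suitably ironic) LaunchAgent, then `flap_rate agents.log | sort -k2 -rn` gives the noise ranking the table above was built by hand.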

None of that is urgent. The system works. This inventory exists so the next reorg has a ground-truth starting point.