
Backup & Restore

The stages of backup grief — denial, anger, bargaining, testing, acceptance

If you’re reading this calmly, on a Tuesday afternoon, with a cup of coffee — good. You’re doing this right.

If you’re reading this at 3 AM because a disk just made a sound disks should never make — we’re sorry. But this is exactly why this page exists, and past-you was smart enough to set up backups. Unless past-you didn’t, in which case present-you has our deepest condolences.

The daily backup runs automatically at 04:00 via the com.sanctum.backup LaunchAgent — an hour when nothing good or bad is happening, which is precisely the hour you want for the thing standing between you and oblivion.

Terminal window
# Manual run
bash ~/Backups/sanctum-backup.sh
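
To confirm the agent is actually loaded (ideally before you need it):

Terminal window
launchctl print gui/$(id -u)/com.sanctum.backup | head -n 20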

Architecture: restic, content-addressed, dual repo


The daily backup uses restic — a content-addressed deduplicating backup tool that splits files into ~1 MB chunks (Rabin fingerprinting), stores each unique chunk exactly once, and encrypts everything client-side with AES-256 + Poly1305. Two repos receive the same snapshot every day:

Repo                     Purpose                             Where
T9 (primary)             Local, fast restore, full history   /Volumes/T9/sanctum-restic
Google Drive (offsite)   Disaster recovery, encrypted        rclone:gdrive-sanctum:sanctum-restic
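
In sketch form, with an include/exclude list that mirrors the contents below (the real sanctum-backup.sh is the authority; the loop is an assumption):

Terminal window
export RESTIC_PASSWORD=$(security find-generic-password -a sanctum-backup -s sanctum-backup-key -w)
for repo in /Volumes/T9/sanctum-restic rclone:gdrive-sanctum:sanctum-restic; do
  restic -r "$repo" backup ~/.sanctum ~/.openclaw ~/.claude \
    --exclude 'node_modules' --exclude '.venv' --exclude 'target'
done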

Each daily run only ships changed blocks. Typical delta after the first snapshot is tens of MB, not the 46 GB of raw ~/.sanctum. The first snapshot took 20 minutes; daily increments take seconds.

What gets backed up:

  • OpenClaw config (~/.openclaw/)
  • Sanctum config (~/.sanctum/)
  • Claude Code state (~/.claude/, with cache/telemetry excluded)
  • All LaunchAgent plists
  • SSH keys and config (the whole repo is encrypted, so private keys live alongside everything else)
  • Shell + git config (.zshrc, .zprofile, .gitconfig)
  • TTS models and voice files
  • Selected project source trees (node_modules, .venv, target, model files excluded — restic dedups, but excluding rebuildables saves scan time)
  • Per-run metadata: crontab, Homebrew lists, Docker state, LM Studio model list, tool versions, VM SSH-side configs (captured roughly as sketched below)
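
The metadata capture is the sort of thing a few plain shell dumps cover; a sketch with assumed file locations:

Terminal window
meta=~/.sanctum/state/backup-meta   # assumed location
mkdir -p "$meta"
crontab -l > "$meta/crontab.txt" 2>/dev/null || true
brew list --versions > "$meta/brew-formulae.txt"
brew list --cask --versions > "$meta/brew-casks.txt"
docker ps -a --format '{{.Names}}\t{{.Image}}\t{{.Status}}' > "$meta/docker-containers.txt" 2>/dev/null || true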

The entire repo — every byte that hits T9 or leaves your machine for Google Drive — is encrypted client-side with AES-256 using a passphrase from macOS Keychain (sanctum-backup-key). Even if someone walked off with T9 or compromised your Drive folder, they would see opaque ciphertext blobs.
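
Storing and reading that passphrase uses the stock security CLI (leaving -w bare at the end makes it prompt instead of landing in shell history):

Terminal window
# One-time: store the passphrase (prompts interactively)
security add-generic-password -a sanctum-backup -s sanctum-backup-key -w
# Every run: read it back
security find-generic-password -a sanctum-backup -s sanctum-backup-key -w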

Retention (GFS, enforced by restic forget --prune)

--keep-daily 7 # last 7 daily snapshots
--keep-weekly 4 # one snapshot per week, last 4 weeks
--keep-monthly 12 # one snapshot per month, last year

Retention runs inline on every backup. Restic’s prune does block-level garbage collection — old snapshots are removed and any chunks no longer referenced by any surviving snapshot are reclaimed. There is no separate retention script; the policy lives in the backup script itself.
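
Assembled from the flags above, the inline step presumably looks like:

Terminal window
restic -r /Volumes/T9/sanctum-restic forget --prune \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 12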

Why we moved off iCloud full-copy snapshots


Until 2026-04-26 the daily backup was an rsync of ~/.sanctum into a new dated directory under iCloud/Backups/daily/YYYY-MM-DD/. Each snapshot was a full copy. iCloud Drive doesn’t understand hardlinks, so even a --link-dest-based incremental wouldn’t have worked through the Files-app sync layer. The result, observed empirically:

  • 15 daily directories on internal SSD: 356 GB
  • Each snapshot: ~46 GB of mostly-identical bytes copied fresh
  • iCloud was syncing all of that every day in both directions
  • Internal SSD hit 95% capacity, triggering this entire migration

The same retention window in restic costs 15 GB on disk after seven days, growing by a few hundred MB per week. Deleting the legacy iCloud tree reclaimed 357 GB on the spot.

VM snapshots

Weekly automated snapshot (Sundays 03:00):

Terminal window
bash ~/Backups/vm-snapshot.sh

Creates compressed copies of the VM disk image and EFI vars:

  • iCloud/Backups/VM/qcow2.zst
  • iCloud/Backups/VM/efi_vars.fd.zst

Uses a live copy (no VM suspend), with an SSH-triggered sync in the guest before the snapshot. The VM keeps running. It doesn’t even know it’s being photographed.
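
A sketch of the core, with assumed image paths and guest hostname (the iCloud destination is the standard CloudDocs mount):

Terminal window
ICLOUD=~/Library/"Mobile Documents"/com~apple~CloudDocs
ssh vm-guest 'sync'   # flush guest buffers first; hostname assumed
zstd -T0 -f ~/VMs/disk.qcow2 -o "$ICLOUD/Backups/VM/qcow2.zst"
zstd -T0 -f ~/VMs/efi_vars.fd -o "$ICLOUD/Backups/VM/efi_vars.fd.zst"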

Restore

Seven phases. Like grief, but more useful.

  1. Homebrew — Restore package list, brew install
  2. Dotfiles & SSH — Restore keys, shell config
  3. App configs — OpenClaw, Sanctum, HA
  4. Projects — Restore git repos
  5. LM Studio — Re-download models (not backed up due to size)
  6. Docker — Restore container configs
  7. LaunchAgents — Restore plists, bootstrap services
Terminal window
bash ~/Backups/sanctum-restore.sh

Script                    Purpose
sanctum-backup.sh         Daily restic backup → T9 + Google Drive
sanctum-restore.sh        Multi-phase restore (legacy, predates restic — see “Restoring from restic” below)
vm-snapshot.sh            Weekly VM disk snapshot
rotate-secrets.sh         Monthly secret rotation
backup-sanctum.sh (MBP)   Weekly cross-machine snapshot, pulls ~/.sanctum/ from manoir to MBP — see below

Treat each one with the respect you’d give a fire extinguisher — boring until the day it’s the only thing that matters.

Restoring from restic

Terminal window
export RESTIC_PASSWORD=$(security find-generic-password -a sanctum-backup -s sanctum-backup-key -w)
# List snapshots from either repo
restic -r /Volumes/T9/sanctum-restic snapshots
restic -r rclone:gdrive-sanctum:sanctum-restic snapshots
# Restore everything from the latest snapshot to a target dir
restic -r /Volumes/T9/sanctum-restic restore latest --target /tmp/restore
# Restore a single file, two ways:
# 1) Mount the repo as a read-only FUSE filesystem and just `cp` (Linux only)
restic -r /Volumes/T9/sanctum-restic mount /tmp/restic-mount
# 2) On macOS, use restore --include instead
restic -r /Volumes/T9/sanctum-restic restore latest \
  --target /tmp/restore \
  --include /Users/neo/.sanctum/instance.yaml
# Verify a repo end-to-end (run quarterly)
restic -r /Volumes/T9/sanctum-restic check
restic -r rclone:gdrive-sanctum:sanctum-restic check

Disk monitoring

The 2026-04-26 incident — internal SSD at 95% capacity from full-copy iCloud snapshots — exposed a missing layer: there was no early signal. The /status endpoint of the Navigator Sidecar (navigator-sidecar.js, port 3344) now emits a disk block:

{
  "disk": {
    "path": "/System/Volumes/Data",
    "status": "OPERATIONAL",
    "usedPct": 55,
    "usedGb": 476,
    "freeGb": 400,
    "totalGb": 926,
    "thresholds": { "warn": 85, "fail": 90 },
    "message": "55% used"
  }
}

Status flips to DEGRADED at 85% (warn) and FAILED at 90% (page). Disk health folds into the aggregate sidecar status, so a Holocron tile turning yellow now means either a project monitor flagged degradation or the disk is filling. Configurable via NAVIGATOR_DISK_PATH, NAVIGATOR_DISK_WARN_PCT, NAVIGATOR_DISK_FAIL_PCT.
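
To eyeball the live values (assumes jq is installed):

Terminal window
curl -s http://localhost:3344/status | jq .disk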

Weekly Cross-Machine Snapshot (MBP pulls from manoir)


The daily backup lives on the Mini. If the Mini itself is compromised — disk failure, filesystem corruption, malicious actor — that backup is gone with it. The weekly snapshot is the second dimension: pulled from manoir to the MBP every Saturday 04:30, SHA256-pinned, rotated to last 8.

Terminal window
# Manual run from MBP
~/Documents/Claude_Code/tools/backup-sanctum.sh
# ...or the deployed copy that the LaunchAgent invokes:
~/.local/bin/backup-sanctum.sh
  • tools/backup-sanctum.sh tars ~/.sanctum/{scripts,services,state,docs,bin,boot,instance.yaml} plus critical LaunchAgent plists on manoir via SSH.
  • ~/.sanctum/secrets/ is excluded — secrets live in keychain / 1Password / SOPS, not this tarball.
  • ~/.sanctum/logs/ is excluded — too large, not load-bearing.
  • The tarball is pulled to ~/Backups/sanctum-mini/sanctum-mini-<timestamp>.tgz.
  • A .sha256 checksum file is written alongside. shasum -a 256 -c <file>.sha256 verifies integrity.
  • Rotation keeps the newest 8 snapshots (≈2 months of weekly history).
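
The heart of the run is a tar-over-SSH pull. A sketch assembled from the behavior above (exclusion patterns illustrative; the deployed script is the authority):

Terminal window
cd ~/Backups/sanctum-mini
out=sanctum-mini-$(date +%Y%m%d-%H%M%S).tgz
ssh neo@100.0.0.25 'tar -czf - -C / \
  --exclude "Users/neo/.sanctum/secrets" --exclude "Users/neo/.sanctum/logs" \
  Users/neo/.sanctum' > "$out"
shasum -a 256 "$out" > "$out.sha256"
# Rotation: keep the newest 8 tarballs (and their checksums)
ls -t sanctum-mini-*.tgz | tail -n +9 | xargs -I{} rm -f {} {}.sha256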

~/Library/LaunchAgents/com.sanctum.backup.plist (template in-repo at Claude_Code/tools/launchagents/):

<key>StartCalendarInterval</key>
<dict>
  <key>Weekday</key> <integer>6</integer> <!-- Saturday -->
  <key>Hour</key>    <integer>4</integer>
  <key>Minute</key>  <integer>30</integer>
</dict>

Activate with:

Terminal window
launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.sanctum.backup.plist
launchctl enable gui/$(id -u)/com.sanctum.backup
Verifying a snapshot and pulling files back out:

Terminal window
cd ~/Backups/sanctum-mini
shasum -a 256 -c sanctum-mini-20260418-111502.tgz.sha256 # expect: OK
# See what's inside
tar -tzf sanctum-mini-20260417-222652.tgz | head
# Restore a single file onto manoir
tar -xzf sanctum-mini-20260417-222652.tgz \
  Users/neo/.sanctum/services/council-mlx.yaml \
  -O | ssh neo@100.0.0.25 'cat > /Users/neo/.sanctum/services/council-mlx.yaml'