@@ -38,10 +38,11 @@ There are no tests, linters, or build steps in this repo — it is pure configur
 
 | Command | What it does |
 |---|---|
-| `backup` | Manual restic backup to B2 (tags snapshot with `$RESTIC_HOST` from `~/.restic/b2-env`) |
-| `restore` | Pull latest snapshot from B2 into volumes; existing data archived first |
-| `sync` | `git fetch` + `git pull --ff-only` for every repo under `~/code/`; skips dirty/divergent |
-| `status` | Last snapshot per host across both webdev and alpine restic repos, plus repo size |
+| `restic-backup` | Manual restic backup to B2 (tags snapshot with `$RESTIC_HOST` from `~/.restic/b2-env`) |
+| `restic-restore` | Pull latest snapshot from B2 into volumes; existing data archived first |
+| `restic-status` | Last snapshot per host across both webdev and alpine restic repos, plus repo size |
+| `code-sync` | `git fetch` + `git pull --ff-only` for every repo under `~/code/` (skips dirty/divergent), then clones any non-archived non-fork repos owned by overshard on GitHub that aren't local yet |
+| `server-health-check` | SSH into alpine and run its `/root/server-health-check.sh` (apk log, restic stats, free, df, docker stats). Override target with `$ALPINE_HOST` |
 
 ## Architecture
@@ -49,7 +50,7 @@ There are no tests, linters, or build steps in this repo — it is pure configur
 - **`dotfiles/host/`** — Host-side configs that don't belong in the container: Zed editor settings and Windows SSH config. These are NOT placed automatically by bootstrap.ps1; copy them manually on a fresh machine:
   - `dotfiles/host/zed-settings.json` -> `%APPDATA%\Zed\settings.json`
   - `dotfiles/host/ssh-config` -> `~\.ssh\config` (merge into existing entries if you have other Hosts already configured)
-- **`containers/webdev/`** — Ubuntu 24.04 dev container with Node 22, Python 3 (pip + uv), Bun, Docker CLI, Playwright Chromium (under `/opt/playwright-browsers`, for the Claude playwright MCP), and standard dev tools (neovim, tmux, git, rsync, htop, nmap, unzip, etc.). Stays alive via `sleep infinity`; entered through `docker exec -it ... tmux` for the TUI workflow or over SSH on host port 2222 for editor remote-dev. `entrypoint.sh` starts sshd before exec'ing CMD; host keys persist in the `bythewood-ssh` volume so fingerprints survive rebuilds. Started with `docker run --init` so PID 1 reaps zombies left behind when tmux/sshd children exit. Helper scripts (`backup`, `restore`, `sync`, `status`) are baked in at `/home/dev/scripts/` and on PATH. Host setup is automated by `bootstrap.ps1`.
+- **`containers/webdev/`** — Ubuntu 24.04 dev container with Node 22, Python 3 (pip + uv), Bun, Docker CLI, Playwright Chromium (under `/opt/playwright-browsers`, for the Claude playwright MCP), and standard dev tools (neovim, tmux, git, rsync, htop, nmap, unzip, etc.). Stays alive via `sleep infinity`; entered through `docker exec -it ... tmux` for the TUI workflow or over SSH on host port 2222 for editor remote-dev. `entrypoint.sh` starts sshd before exec'ing CMD; host keys persist in the `bythewood-ssh` volume so fingerprints survive rebuilds. Started with `docker run --init` so PID 1 reaps zombies left behind when tmux/sshd children exit. Helper scripts (`restic-backup`, `restic-restore`, `restic-status`, `code-sync`, `server-health-check`) are baked in at `/home/dev/scripts/` and on PATH. Host setup is automated by `bootstrap.ps1`.
 - **`hosts/alpine/`** — Production server setup: Caddy reverse proxy (auto HTTPS), Docker Compose for services, restic backups to Backblaze B2, UFW firewall, push-to-deploy via git hooks.
 
 ## Deployed Projects
@@ -23,9 +23,11 @@ taproot/
 │   └── webdev/
 │       ├── Dockerfile               the vessel — Ubuntu 24.04 dev image
 │       ├── bootstrap.ps1            one-shot host setup (Windows)
-│       ├── backup.sh, restore.sh    restic to / from B2
-│       ├── sync.sh                  git pull every repo under ~/code/
-│       └── status.sh                last snapshot per host across repos
+│       ├── restic-backup.sh         manual restic snapshot to B2
+│       ├── restic-restore.sh        pull latest snapshot from B2
+│       ├── restic-status.sh         last snapshot per host across repos
+│       ├── code-sync.sh             pull existing repos + clone new ones from GitHub
+│       └── server-health-check.sh   ssh into alpine and run its health check
 └── hosts/
     └── alpine/
         ├── quickstart.sh            provision a fresh server
@@ -93,10 +95,11 @@ All in `~/scripts/` and on `PATH`:
 
 | Command | What it does |
 |---|---|
-| `backup` | Manual restic backup to B2; snapshot tagged with `$RESTIC_HOST` |
-| `restore` | Pull latest snapshot from B2; existing data archived first |
-| `sync` | `git fetch && git pull --ff-only` for every repo under `~/code/` |
-| `status` | Last snapshot per host across both restic repos, plus repo size |
+| `restic-backup` | Manual restic backup to B2; snapshot tagged with `$RESTIC_HOST` |
+| `restic-restore` | Pull latest snapshot from B2; existing data archived first |
+| `restic-status` | Last snapshot per host across both restic repos, plus repo size |
+| `code-sync` | `git fetch && git pull --ff-only` for every repo under `~/code/`, then clones any non-archived non-fork repos owned by overshard on GitHub that aren't local yet |
+| `server-health-check` | SSH into alpine and run its `/root/server-health-check.sh`. Override target with `$ALPINE_HOST` |
 
 ## The dotfiles
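The clone step `code-sync` performs can be sketched with the `jq` the Dockerfile now installs. This is a hypothetical illustration, not the actual `code-sync.sh`: the sample JSON mimics the shape of GitHub's `/users/overshard/repos` REST response, and the filter keeps only non-archived, non-fork repos.

```sh
# Hypothetical sketch of code-sync's repo filter (the real code-sync.sh may
# differ). repos_json stands in for the GitHub /users/overshard/repos
# response; only non-archived, non-fork entries survive the jq filter.
repos_json='[
  {"name":"taproot","archived":false,"fork":false,"clone_url":"https://github.com/overshard/taproot.git"},
  {"name":"old-blog","archived":true,"fork":false,"clone_url":"https://github.com/overshard/old-blog.git"},
  {"name":"some-fork","archived":false,"fork":true,"clone_url":"https://github.com/overshard/some-fork.git"}
]'
urls=$(printf '%s' "$repos_json" |
  jq -r '.[] | select((.archived or .fork) | not) | .clone_url')
echo "$urls"
```

Each surviving URL would then be cloned only when the matching directory under `~/code/` doesn't exist yet, so repos already present are left to the ff-only pull pass.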
@@ -155,7 +158,7 @@ export RESTIC_HOST="desktop" # or "laptop"
 ```
 
 Optional: drop the alpine repo password at `~/.restic/alpine-password`
-(prompted for during bootstrap) so `status` can report on the alpine repo too.
+(prompted for during bootstrap) so `restic-status` can report on the alpine repo too.
 
 ### Alpine credentials
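For orientation, `~/.restic/b2-env` would carry restic's standard B2 environment variables alongside the `RESTIC_HOST` export shown above. This is a hedged sketch only: the variable names are restic's documented ones, but the repository path, credential values, and password-file location are placeholders, not the repo's actual config.

```sh
# Hypothetical shape of ~/.restic/b2-env — variable names are restic's
# standard ones; repository path and credentials are placeholders:
export RESTIC_REPOSITORY="b2:<bucket>:/webdev"
export B2_ACCOUNT_ID="<keyID>"
export B2_ACCOUNT_KEY="<applicationKey>"
export RESTIC_PASSWORD_FILE="$HOME/.restic/password"
export RESTIC_HOST="desktop"   # or "laptop"
```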
@@ -166,9 +169,9 @@ pattern), at `/root/.restic/password` and `/root/.restic/b2-env`. The alpine
 ### Daily flow
 
 ```sh
-backup         # take a snapshot from this machine
-status         # check fleet health (both repos, every host) from anywhere
-sync           # pull every repo under ~/code/ to GitHub HEAD
+restic-backup  # take a snapshot from this machine
+restic-status  # check fleet health (both repos, every host) from anywhere
+code-sync      # pull every repo under ~/code/ + clone any new ones from GitHub
 ```
 
 ### Restore
@@ -178,7 +181,7 @@ Existing data is moved aside to `~/before-restore-<UTC-ISO>/` (webdev) or
 snapshot back:
 
 ```sh
-restore          # webdev (from inside the container)
+restic-restore   # webdev (from inside the container)
 ssh root@server /root/restore.sh --up   # alpine; --up auto-restarts containers
 ```
 
modified
containers/webdev/Dockerfile
@@ -18,10 +18,13 @@
 #   Connect (SSH):   ssh -p 2222 dev@localhost
 #
 # Helper scripts inside the container (in PATH at /home/dev/scripts/):
-#   backup    manual restic backup to B2
-#   restore   pull latest snapshot from B2
-#   sync      git fetch + ff-only pull every repo under ~/code/
-#   status    last snapshot per host across both restic repos
+#   restic-backup        manual restic backup to B2
+#   restic-restore       pull latest snapshot from B2
+#   restic-status        last snapshot per host across both restic repos
+#   code-sync            ff-only pull every ~/code/ repo, then clone any
+#                        new non-archived owned repos from GitHub
+#   server-health-check  ssh into alpine and run the server-side health
+#                        script (apk, restic, free, df, docker stats)
 #
 # The bythewood-* volumes survive image rebuilds, so /home/dev/code,
 # /home/dev/.claude, /home/dev/.ssh, and /home/dev/.restic persist. Fresh
@@ -49,7 +52,7 @@ ENV DEBIAN_FRONTEND=noninteractive \
 RUN apt-get update && \
     apt-get install -y --no-install-recommends \
     # Dev tools
-    curl git rsync neovim openssh-client openssh-server tmux whois nmap unzip htop tree sudo \
+    curl git rsync neovim openssh-client openssh-server tmux whois nmap unzip htop tree sudo jq \
     # Backups
     restic \
     # Build tools
@@ -110,10 +113,11 @@ RUN mkdir -p /home/dev/code /home/dev/.claude /home/dev/.ssh /home/dev/.restic &
     chmod 600 /home/dev/.ssh/config && \
     chown -R dev:dev /home/dev/code /home/dev/.claude /home/dev/.ssh /home/dev/.restic
 
-COPY containers/webdev/backup.sh /home/dev/scripts/backup
-COPY containers/webdev/restore.sh /home/dev/scripts/restore
-COPY containers/webdev/sync.sh /home/dev/scripts/sync
-COPY containers/webdev/status.sh /home/dev/scripts/status
+COPY containers/webdev/restic-backup.sh /home/dev/scripts/restic-backup
+COPY containers/webdev/restic-restore.sh /home/dev/scripts/restic-restore
+COPY containers/webdev/restic-status.sh /home/dev/scripts/restic-status
+COPY containers/webdev/code-sync.sh /home/dev/scripts/code-sync
+COPY containers/webdev/server-health-check.sh /home/dev/scripts/server-health-check
 RUN chmod +x /home/dev/scripts/* && \
     chown -R dev:dev /home/dev/scripts
modified
containers/webdev/bootstrap.ps1
@@ -48,8 +48,8 @@ container, ssh, restic-password, b2-env, alpine-password, restore.
 
 .PARAMETER Restore
-    After all setup steps, run ~/scripts/restore inside the container to pull
-    data from the latest B2 snapshot. Opt-in.
+    After all setup steps, run ~/scripts/restic-restore inside the container
+    to pull data from the latest B2 snapshot. Opt-in.
 
 .PARAMETER Force
     Pull latest taproot, rebuild the image, remove the existing container,
@@ -360,7 +360,7 @@ function Step-AlpinePassword {
         Skip "/home/dev/.restic/alpine-password already set"
         return
     }
-    Write-Host " Optional: lets ~/scripts/status query the alpine repo too." -ForegroundColor DarkGray
+    Write-Host " Optional: lets ~/scripts/restic-status query the alpine repo too." -ForegroundColor DarkGray
     $pw = Read-Secret "Alpine repo password (or empty to skip)"
     if (-not $pw) {
         Skip "no value provided"
@@ -377,8 +377,8 @@ function Step-Restore {
         Skip "use -Restore to pull data from B2"
         return
     }
-    Done "running ~/scripts/restore inside container"
-    docker exec -it $ContainerName /home/dev/scripts/restore
+    Done "running ~/scripts/restic-restore inside container"
+    docker exec -it $ContainerName /home/dev/scripts/restic-restore
 }
 
 # ---------------------------------------------------------------------------
renamed
containers/webdev/sync.sh → containers/webdev/code-sync.sh
renamed
containers/webdev/backup.sh → containers/webdev/restic-backup.sh
renamed
containers/webdev/restore.sh → containers/webdev/restic-restore.sh
renamed
containers/webdev/status.sh → containers/webdev/restic-status.sh
added
containers/webdev/server-health-check.sh
@@ -0,0 +1,17 @@
+#!/bin/sh
+#
+# server-health-check.sh
+#
+# Run the alpine server's /root/server-health-check.sh from inside webdev,
+# streaming its output back. Same script that's printed in the daily MOTD on
+# the server itself.
+#
+# Override the host with $ALPINE_HOST if it's ever something other than
+# root@bythewood.me.
+#
+
+set -eu
+
+ALPINE_HOST="${ALPINE_HOST:-root@bythewood.me}"
+
+exec ssh "$ALPINE_HOST" /root/server-health-check.sh
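The host override in the new script rests on POSIX `${VAR:-default}` expansion: use the variable if set and non-empty, else fall back. A tiny illustration, where `TARGET` stands in for `ALPINE_HOST` and the override value is made up:

```sh
# Demonstrates the ${VAR:-default} fallback server-health-check.sh relies on.
# TARGET stands in for ALPINE_HOST; root@staging.example is illustrative.
unset TARGET
host_default="${TARGET:-root@bythewood.me}"    # unset -> falls back to default
TARGET="root@staging.example"
host_override="${TARGET:-root@bythewood.me}"   # set -> the override wins
echo "$host_default"
echo "$host_override"
```

So from inside webdev, `ALPINE_HOST=user@other-host server-health-check` would point the check at a different machine without editing the script.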