# Status
A self-hosted uptime monitor and status page. HTTP checks every 3 minutes, daily Lighthouse audits, weekly in-process SEO crawls, and alerts via email and Discord webhook on state transitions.
Single-binary axum service backed by SQLite. Originally a Django service;
that codebase has been retired and replaced with this Rust port. Data
from the Django era can be migrated in via `./status migrate <django.sqlite3>`
(preserves Property UUIDs so existing public status URLs keep working).
## Features
- HTTP uptime checks with rolling uptime percentages and recent-uptime bars
- Lighthouse audits (performance, accessibility, best practices, SEO) with weighted breakdown and top savings opportunities
- In-process SEO crawler (reqwest + scraper) extracting title, description, canonical, OG tags, and H1 per page, plus 38 SEO/a11y/perf/security checks
- Security header analysis (HTTPS, HSTS, HSTS preload, X-Frame-Options, etc.)
- Alert state machine with debounce on flaps (two consecutive non-200s to go down, immediate 200 to come back up)
- Direct-to-MX email and Discord webhook alerts on state transitions only
- PDF + markdown report export per property (PDF rendered in-process via embedded Typst, no chromium subprocess; markdown rendered from a template)
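The debounce rule above (two consecutive non-200s to go down, a single 200 to come back up, alerts only on transitions) can be sketched as a small state machine. This is an illustrative sketch, not the project's actual code; the type and method names are invented:

```rust
// Simplified sketch of the alert debounce described above: two consecutive
// non-200 checks flip a property to Down, a single 200 flips it back to Up,
// and alerts fire only on state transitions.

#[derive(Clone, Copy, PartialEq, Debug)]
enum State {
    Up,
    Down,
}

struct Monitor {
    state: State,
    consecutive_failures: u32,
}

impl Monitor {
    fn new() -> Self {
        Monitor { state: State::Up, consecutive_failures: 0 }
    }

    /// Feed one HTTP check result; returns Some(new_state) only on a transition.
    fn observe(&mut self, status_code: u16) -> Option<State> {
        if status_code == 200 {
            self.consecutive_failures = 0;
            if self.state == State::Down {
                self.state = State::Up; // immediate recovery on a single 200
                return Some(State::Up);
            }
        } else {
            self.consecutive_failures += 1;
            if self.state == State::Up && self.consecutive_failures >= 2 {
                self.state = State::Down; // debounce: two non-200s in a row
                return Some(State::Down);
            }
        }
        None // no transition, so no alert
    }
}

fn main() {
    let mut m = Monitor::new();
    assert_eq!(m.observe(500), None);              // first failure: no alert yet
    assert_eq!(m.observe(500), Some(State::Down)); // second failure: "down" alert
    assert_eq!(m.observe(500), None);              // still down: no repeat alert
    assert_eq!(m.observe(200), Some(State::Up));   // single 200: "recovered" alert
}
```

Returning `Option<State>` keeps the alert decision in one place: the caller only emails or posts to Discord when it gets `Some(_)`, which is what restricts alerts to transitions.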
## Stack
| Concern | Crate / Tool |
|---|---|
| Web framework | axum + tokio |
| Database | sqlx + SQLite (WAL, synchronous=NORMAL) |
| Auth | tower-cookies signed sessions |
| Template engine | minijinja |
| HTTP client | reqwest (rustls) |
| Crawler | scraper + html5ever + robotstxt + hickory-resolver |
| Lighthouse | `bun run --bun node_modules/.bin/lighthouse` |
| Email alerts | lettre (direct-to-MX, opportunistic STARTTLS) |
| PDF reports | embedded Typst (typst + typst-pdf + typst-kit) |
| Static assets | Vite + Bun, Bootstrap 5, Chart.js, monaspace font |
## Requirements
You need Docker installed for a quick production start, or you can read the
Dockerfile for the exact dependency list and adjust for your distro.
For local development:
- rust (cargo) for the backend
- bun for everything JS: the frontend bundler (Vite) AND the
  Lighthouse CLI. The Rust binary invokes Lighthouse via
  `bun run --bun node_modules/.bin/lighthouse`, which symlinks `node` → `bun`
  so the shim’s `#!/usr/bin/env node` shebang resolves to bun’s runtime.
  No nodejs/npm required.
- chromium for Lighthouse’s own audits (PDF reports do not need chromium; they go through embedded Typst)
## Running locally
```sh
cp samplefiles/env.sample .env   # set STATUS_PASSWORD at minimum
make run                         # vite watch + cargo run on port 8000
```
Server boots, applies migrations, and starts the scheduler in-process. Open
http://localhost:8000, log in with the password from .env, add a property
URL.
## Configuration
All config comes from .env (loaded via dotenvy):
| Variable | Required | Purpose |
|---|---|---|
| `STATUS_PASSWORD` | yes | Single operator password |
| `BASE_URL` | yes for prod | Used in absolute URLs (sitemap, OG tags, alert email links). No trailing slash |
| `PORT` | no (default 8000) | HTTP listen port |
| `STATUS_COOKIE_SECRET` | no | 32+ bytes for signing the session cookie. Falls back to a SHA-512 of the password, so rotating the password invalidates sessions |
| `ALERT_EMAIL` | no | Recipient for outage / recovery emails. Leave unset to disable email |
| `DISCORD_WEBHOOK_URL` | no | Discord webhook for outage / recovery embeds. Leave unset to disable |
| `STATUS_DATA_DIR` | no (default ./data) | Where the SQLite db lives. Production sets this to /data |
| `STATUS_ROOT` | no | Override the project root (where templates/, dist/, migrations/ are read from) |
| `CHROMIUM_BIN` | no | Path to chromium for Lighthouse. Falls back to PATH lookup, then a /opt/playwright-browsers/ glob |
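The default-and-fallback pattern in the table above can be sketched as a pair of pure functions (a sketch only; the function names are invented, and in the real service the inputs come from `std::env::var(...)` after dotenvy has loaded `.env`):

```rust
/// Resolve PORT: parseable value wins, anything else falls back to 8000.
/// Illustrative helper, not the project's actual config code.
fn port(raw: Option<&str>) -> u16 {
    raw.and_then(|p| p.parse().ok()).unwrap_or(8000)
}

/// Resolve STATUS_DATA_DIR with its documented "./data" default.
fn data_dir(raw: Option<&str>) -> String {
    raw.unwrap_or("./data").to_string()
}

fn main() {
    assert_eq!(port(None), 8000);          // unset → default
    assert_eq!(port(Some("9090")), 9090);  // set → parsed value
    assert_eq!(port(Some("nope")), 8000);  // unparseable → default
    assert_eq!(data_dir(None), "./data");
}
```

Keeping the fallback logic in pure functions like these makes the defaults trivially testable without touching the process environment.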
## Make targets
| Target | What it does |
|---|---|
| `make run` (default) | Vite watch + cargo run on port 8000, plus the in-process scheduler |
| `make build` | Vite assets + release binary (target/release/status) |
| `make start` | Run the release binary (after `make build`) |
| `make pull` | rsync the production SQLite db from the git remote server into data/ |
| `make migrate FROM=<path-to-django.sqlite3>` | One-shot import of an existing Django status database, preserving Property UUIDs so public status URLs keep working. Add FORCE=1 to wipe first |
| `make push` | git push to every configured remote |
| `make clean` | Remove target/, dist/, frontend/node_modules/, root node_modules/, and data/ |
There are no tests or linters configured.
## Importing an existing Django status DB
If you have a SQLite database from the Django version of this project, you can keep your existing properties + check history:
```sh
make migrate FROM=/path/to/django/db.sqlite3
```
Add FORCE=1 to wipe an existing local rust DB first. The migration preserves
Property UUIDs so any public status URLs you’ve shared keep working.
## Production deploy
The same `git push server master` post-receive hook flow used by the rest of
my projects:
Server:
```sh
apk update && apk upgrade && apk add docker docker-compose caddy git iptables ip6tables ufw
ufw allow 22/tcp && ufw allow 80/tcp && ufw allow 443/tcp && ufw --force enable
rc-update add docker boot && service docker start
mkdir -p /srv/git/status.git && cd /srv/git/status.git && git init --bare
```
Local:
```sh
git remote add server root@status.example.com:/srv/git/status.git
git push --set-upstream server master
```
Server:
```sh
mkdir -p /srv/docker && cd /srv/docker && git clone /srv/git/status.git status && cd /srv/docker/status
cp samplefiles/Caddyfile.sample /etc/caddy/Caddyfile
cp samplefiles/env.sample .env  # edit STATUS_PASSWORD, BASE_URL, ALERT_EMAIL, DISCORD_WEBHOOK_URL
cp samplefiles/post-receive.sample /srv/git/status.git/hooks/post-receive && chmod +x /srv/git/status.git/hooks/post-receive
mkdir -p /srv/data/status && chown -R 1000:1000 /srv/data/status
docker-compose up --build --detach
rc-update add caddy boot && service caddy start
```
## Backups
All data is stored in /srv/data/status/ and your repo is in
/srv/git/status.git/. Back up both of those folders and you have a complete
backup. The Caddyfile and .env are easy enough to recreate but back them
up too if you want to be thorough.
## Support
I won’t be providing user support for this project. I’m happy to accept good pull requests and fix bugs, but I don’t have time to help people run or use this project.