
Status Pages

Public and admin status dashboards with 24-hour, 7-day, and 30-day uptime views, mode-based filtering, and incident timeline

February 28, 2026 Updated March 13, 2026

ArgoBox has two status pages with different audiences:

  • Public (/status) — Shows all infrastructure except Build Swarm and Workstations. Showcases homelab infrastructure alongside user-facing services. Features dual scoring: public services uptime shown separately from infrastructure uptime.
  • Admin (/admin/status) — Shows every monitor from Uptime Kuma. Includes Build Swarm Orchestrators, Gateway, and Workstations.

Both pages use the same components with a mode prop ("public" or "admin") that controls filtering at runtime via data-mode HTML attributes.

Architecture

Browser (status page)
  │
  ├── /api/uptime-kuma/status-page/public          → Uptime Kuma config + groups
  ├── /api/uptime-kuma/status-page/heartbeat/public → Per-monitor heartbeat lists
  ├── /api/kuma-history/history/hourly?days=N       → Server-side hourly aggregates
  └── /api/kuma-history/history/services?days=N     → Per-service daily snapshots

All Uptime Kuma data is proxied through Cloudflare Workers (production: https://status.argobox.com) or the Astro dev server. The browser never connects to Uptime Kuma directly.

The admin page also includes AIServiceStatus (Ollama health check via /api/status/ai-services), which is not on the public page since Ollama is not reachable from CF Pages.

Mode-Based Filtering

All status components accept a mode?: 'public' | 'admin' prop:

<ServiceDashboard title="Service Status" mode="public" />
<ServiceDashboard title="All Infrastructure" mode="admin" />

The mode is rendered as data-mode={mode} on the component's section element. Client-side JavaScript reads this attribute to determine filtering behavior.

HIDDEN_PUBLIC_GROUPS (blacklist)

Public mode hides these groups (admin shows everything):

const HIDDEN_PUBLIC_GROUPS = [
  'Build Swarm - Drones',
  'Build Swarm - Orchestrators',
  'Build Swarm - Gateway',
  'Workstations',
];

All other groups are shown on both pages. This approach means new Uptime Kuma groups are visible by default — only explicitly blacklisted groups are hidden.

HIDDEN_NAMES

These monitors are hidden from both public and admin display (security-sensitive):

const HIDDEN_NAMES = ['Vault', 'Bitwarden', 'Vaultwarden', 'Password Manager'];
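
A minimal sketch of how the two constants combine with the mode (the filterMonitors helper and the monitor shape are illustrative; the constant values come from this doc):

```typescript
type Monitor = { name: string; group: string };

const HIDDEN_PUBLIC_GROUPS = [
  'Build Swarm - Drones',
  'Build Swarm - Orchestrators',
  'Build Swarm - Gateway',
  'Workstations',
];
const HIDDEN_NAMES = ['Vault', 'Bitwarden', 'Vaultwarden', 'Password Manager'];

// Blacklist semantics: admin sees every group; public hides only the
// blacklisted groups. HIDDEN_NAMES applies to both modes.
function filterMonitors(monitors: Monitor[], mode: 'public' | 'admin'): Monitor[] {
  return monitors.filter((m) => {
    if (HIDDEN_NAMES.includes(m.name)) return false;
    if (mode === 'public' && HIDDEN_PUBLIC_GROUPS.includes(m.group)) return false;
    return true;
  });
}
```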

Three Time Views

Both pages have three tabbed views. All three use the same HIDDEN_PUBLIC_GROUPS and mode detection, ensuring consistent filtering across time ranges.

24-Hour View (default)

Component: ContributionGrid.astro

  • 288 cells (5-minute intervals over 24 hours) displayed as a horizontal timeline bar
  • Color levels: level-0 (gray/no data), level-1 (red/<50%), level-2 (amber/50-75%), level-3 (light green/75-90%), level-4 (bold green/90%+)
  • Stats: average uptime, best streak, total incidents
  • Seeds from kuma-history/hourly?days=2 to fill gaps when browser was closed
  • localStorage key: argobox-uptime-5min-v4 (retains 31 days of 5-min slots)
  • Refreshes every 5 minutes
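
One plausible sketch of the 5-minute slot bookkeeping behind the 288-cell grid. The storage key and 31-day retention come from this doc; the bucketing math and the `"YYYY-MM-DD:slot"` key format are assumptions:

```typescript
const STORAGE_KEY = 'argobox-uptime-5min-v4'; // from the doc
const RETENTION_DAYS = 31;

// 288 slots per day: 24 hours x 12 five-minute buckets.
function slotIndex(d: Date): number {
  return d.getHours() * 12 + Math.floor(d.getMinutes() / 5);
}

// Drop slots older than the retention window; keys are assumed to look
// like "YYYY-MM-DD:slot" with a numeric uptime percentage as the value.
function pruneOldSlots(
  slots: Record<string, number>,
  now: Date,
): Record<string, number> {
  const cutoff = now.getTime() - RETENTION_DAYS * 24 * 60 * 60 * 1000;
  return Object.fromEntries(
    Object.entries(slots).filter(([key]) => {
      const day = new Date(key.split(':')[0]);
      return day.getTime() >= cutoff;
    }),
  );
}
```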

Also shown in the 24-hour view:

  • AIServiceStatus — Ollama status (admin page only)
  • ServiceDashboard — All monitors grouped by category with 24-hour uptime bars
  • TimelineView — Recent incident timeline (10 items public, 20 admin)

7-Day View

Component: WeekView.astro

  • 7 rows (one per day), each with 24 hourly cells
  • Per-service weekly status table with daily uptime cells + 7-day average
  • Seeds from kuma-history/hourly?days=8 and kuma-history/services?days=8
  • Stats: 7-day average, best day, total incident hours

30-Day View

Component: MonthView.astro

  • 30 calendar-style cells (one per day) with date numbers and uptime percentages
  • Per-service monthly horizontal bar chart with uptime percentages
  • Seeds from kuma-history/hourly?days=31 to backfill data
  • Stats: 30-day average, best streak, incident days, days tracked

Uptime Kuma (45 active monitors, 9 groups on status page)

| Group | Monitors | Public | Admin | Score |
|---|---|---|---|---|
| Public Services | 8 | Shown | Shown | Public |
| Internal Services | 13 | Shown | Shown | Infrastructure |
| Docker Services | 7 | Shown | Shown | Infrastructure |
| Tarn Infrastructure | 5 | Shown | Shown | Infrastructure |
| Hypervisors | 2 | Shown | Shown | Infrastructure |
| NAS Storage | 2 | Shown | Shown | Infrastructure |
| Network Infrastructure | 2 | Shown | Shown | Infrastructure |
| Media Services | 3 | Shown | Shown | Infrastructure |
| Workstations | 2 | Hidden | Shown | Excluded |

Removed from the status page (2026-03-10): Build Swarm Orchestrators, Build Swarm Gateway, and Vault (Bitwarden). These monitors were disabled in Uptime Kuma.

Uptime Kuma runs on argobox-lite (10.0.0.199:3003) with a Cloudflare Tunnel at status.argobox.com. Telegram alerts configured for 20 critical monitors.

Monitor Groups and Icons

| Group | Icon | Description |
|---|---|---|
| Public Services | 🌐 | User-facing (ArgoBox, Gitea, Dev, Files, Blog, Labs Engine, Playground Switch, Status Page) |
| Internal Services | 🔧 | Backend (Grafana, Prometheus, OpenWebUI, Command Center, etc.) |
| Docker Services | 🐳 | Container services (cAdvisor, Dozzle, IT-Tools, etc.) |
| Tarn Infrastructure | ⚙️ | Remote Proxmox VMs/CTs (Workbench, Forge Relay, Lab Engine, Binhost) |
| Hypervisors | 🖥️ | Proxmox hosts (Tarn, Izar) |
| NAS Storage | 💾 | Storage arrays (Meridian, Cassiel) |
| Network Infrastructure | 🌐 | Routers (Sentinel, Andromeda) |
| Media Services | 📺 | Plex instances + Shield |
| Workstations | 💻 | Personal machines (hidden from public) |

Dual Uptime Scoring (Public Page)

The public status page shows two separate uptime percentages:

  • Public Uptime — Only "Public Services" group. Drives the headline %, beacon color, overall status text, and Issues count. Target: 99.9%+.
  • Infrastructure — All other visible groups (Internal Services, Docker, Tarn, Hypervisors, NAS, Network, Media). Shown separately so it doesn't drag down the public score.

This lets prospective employers and clients see high public-service reliability while also conveying the scale of the infrastructure being managed.

PUBLIC_SCORE_GROUPS in ServiceDashboard and UptimeHero controls which groups count toward the public headline score. Everything else is automatically classified as infrastructure.

Hidden monitors (Vault/Bitwarden) are always excluded from both scores.
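
A sketch of how the dual score might be computed, assuming PUBLIC_SCORE_GROUPS contains only "Public Services" as described above; the monitor shape and the empty-group fallback are illustrative:

```typescript
type Scored = { name: string; group: string; uptime24h: number };

const PUBLIC_SCORE_GROUPS = ['Public Services'];
const HIDDEN_NAMES = ['Vault', 'Bitwarden', 'Vaultwarden', 'Password Manager'];

function dualScores(monitors: Scored[]): { publicScore: number; infraScore: number } {
  // Hidden monitors never count toward either score.
  const visible = monitors.filter((m) => !HIDDEN_NAMES.includes(m.name));
  const avg = (ms: Scored[]) =>
    ms.length ? ms.reduce((s, m) => s + m.uptime24h, 0) / ms.length : 100; // empty fallback assumed
  return {
    publicScore: avg(visible.filter((m) => PUBLIC_SCORE_GROUPS.includes(m.group))),
    infraScore: avg(visible.filter((m) => !PUBLIC_SCORE_GROUPS.includes(m.group))),
  };
}
```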

Galactic Identity Sanitization

All monitor names are sanitized through nameOverrides maps. Real hostnames are never exposed publicly.

Key mappings:

  • Proxmox: Titan → Hypervisor: Tarn
  • Proxmox: Izar → Hypervisor: Izar
  • NAS: MasaiMara → NAS: Meridian
  • Workstation: callisto → Workstation: Capella
  • Orch: Izar → Orchestrator: Izar
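
A sanitizer sketch; the map's shape is an assumption, with two entries from the key mappings used for illustration. Unmapped names pass through unchanged:

```typescript
// Real monitor names on the left, sanitized "galactic" identities on the right.
const nameOverrides: Record<string, string> = {
  'NAS: MasaiMara': 'NAS: Meridian',
  'Workstation: callisto': 'Workstation: Capella',
};

function sanitizeName(raw: string): string {
  return nameOverrides[raw] ?? raw;
}
```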

Consistency Requirements

Five components share filtering logic and must stay in sync:

| Constant | Components |
|---|---|
| HIDDEN_PUBLIC_GROUPS | ServiceDashboard, UptimeHero, ContributionGrid, WeekView, TimelineView |
| HIDDEN_NAMES | ServiceDashboard, UptimeHero, ContributionGrid, WeekView, TimelineView |
| nameOverrides | ServiceDashboard, WeekView, TimelineView |
| groupNameOverrides | ServiceDashboard |
| PUBLIC_SCORE_GROUPS | ServiceDashboard, UptimeHero |
| Mode detection (data-mode) | All components |

MonthView inherits filtering from WeekView's localStorage snapshots.

When adding or removing monitors/groups, update all five components.

Color Thresholds

| Level | Color | Meaning | Threshold |
|---|---|---|---|
| 4 | Bold green | Fully operational | >= 90% |
| 3 | Light green | Minor issues | >= 75% |
| 2 | Amber | Degraded | >= 50% |
| 1 | Red | Down | > 0% |
| 0 | Gray | No data | No data available |
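
The thresholds map to a grid level roughly like this (a sketch; `null` stands in for a slot with no data):

```typescript
// Maps an uptime percentage to the contribution-grid color level,
// following the thresholds table above.
function uptimeLevel(uptime: number | null): 0 | 1 | 2 | 3 | 4 {
  if (uptime === null) return 0; // gray: no data
  if (uptime >= 90) return 4;    // bold green: fully operational
  if (uptime >= 75) return 3;    // light green: minor issues
  if (uptime >= 50) return 2;    // amber: degraded
  return 1;                      // red: down
}
```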

Key Files

| File | Purpose |
|---|---|
| src/pages/status.astro | Public status page (mode="public") |
| src/pages/admin/status.astro | Admin status page (mode="admin") |
| src/config/modules/status.ts | Admin sidebar module registration |
| src/components/status/ContributionGrid.astro | 24-hour 5-min timeline bar |
| src/components/status/WeekView.astro | 7-day hourly grid + per-service weekly |
| src/components/status/MonthView.astro | 30-day calendar grid + per-service monthly |
| src/components/status/ServiceDashboard.astro | All monitors grouped with uptime bars |
| src/components/status/UptimeHero.astro | Hero section with overall uptime % |
| src/components/status/TimelineView.astro | Recent incident timeline |
| src/components/status/AIServiceStatus.astro | Ollama status widget (admin only) |
| src/pages/api/uptime-kuma/[...path].ts | API proxy to Uptime Kuma (KV-cached) |
| src/pages/api/kuma-history/[...path].ts | API proxy to kuma-history (KV-cached) |
| src/pages/api/cache/warmup.ts | Cron endpoint warming all KV caches |
| src/lib/kv-cache.ts | Stale-while-revalidate KV cache engine |
| src/components/status/ResponseTimeChart.astro | SVG response time chart with ping data |

Durability Layer (2026-03-13)

The status page is designed to always render meaningful data even when backend services (Uptime Kuma, kuma-history) are unavailable. Three layers of caching protect against outages:

Layer 1: Server-Side KV Caching

Both API proxies (uptime-kuma/[...path].ts and kuma-history/[...path].ts) use Cloudflare KV with stale-while-revalidate semantics via cachedFetch() from src/lib/kv-cache.ts:

| Endpoint Pattern | Fresh TTL | Stale TTL | KV Expiry | Rationale |
|---|---|---|---|---|
| uptime-kuma:status-page/heartbeat | 30s | 15min | 2h | Heartbeats change every 30s |
| uptime-kuma:status-page | 30s | 15min | 2h | Config changes rarely |
| kuma-history:history/hourly | 2min | 30min | 4h | Hourly aggregates |
| kuma-history:history/daily | 5min | 1h | 4h | Daily data changes slowly |
| kuma-history:history/services | 5min | 1h | 4h | Service snapshots |

Fallback chain: Fresh KV hit (~2ms) → Stale KV + background revalidate → Synchronous origin fetch → Expired stale KV → Graceful error response.
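
A hedged sketch of that fallback chain; the real cachedFetch in src/lib/kv-cache.ts may differ, `KVStore` stands in for Cloudflare KV, and TTLs are passed in milliseconds:

```typescript
interface KVStore {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

type Cached = { body: string; storedAt: number };

async function cachedFetch(
  kv: KVStore,
  key: string,
  origin: () => Promise<string>,
  freshTtlMs: number,
  staleTtlMs: number,
): Promise<{ body: string; source: 'kv-fresh' | 'kv-stale' | 'origin' | 'fallback' }> {
  const raw = await kv.get(key);
  const hit: Cached | null = raw ? (JSON.parse(raw) as Cached) : null;
  const age = hit ? Date.now() - hit.storedAt : Infinity;

  // 1. Fresh KV hit: serve directly.
  if (hit && age <= freshTtlMs) return { body: hit.body, source: 'kv-fresh' };

  // 2. Stale KV hit: serve immediately, revalidate in the background.
  if (hit && age <= staleTtlMs) {
    origin()
      .then((body) => kv.put(key, JSON.stringify({ body, storedAt: Date.now() })))
      .catch(() => { /* keep the stale copy if revalidation fails */ });
    return { body: hit.body, source: 'kv-stale' };
  }

  // 3. Synchronous origin fetch; 4. fall back to an expired stale copy.
  try {
    const body = await origin();
    await kv.put(key, JSON.stringify({ body, storedAt: Date.now() }));
    return { body, source: 'origin' };
  } catch {
    if (hit) return { body: hit.body, source: 'fallback' };
    throw new Error('origin unavailable and no cached copy'); // 5. graceful error upstream
  }
}
```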

Cache warmup: The /api/cache/warmup cron (every 5 min) proactively warms all status endpoints so KV never goes cold:

  • status-page/public, status-page/heartbeat/public
  • history/hourly?days=2, history/hourly?days=8, history/hourly?days=31, history/services?days=8

Cache source is exposed via X-Cache-Source response header (kv-fresh, kv-stale, origin, fallback).

Layer 2: Client-Side localStorage Persistence

Three components persist their rendered data to localStorage so they can show cached content when fetches fail:

| Component | Cache Key | What's Cached |
|---|---|---|
| UptimeHero | argobox-hero-stats | Uptime %, services online count, incidents |
| ServiceDashboard | argobox-svc-groups | Full group/monitor tree with status |
| TimelineView | argobox-incidents | Parsed incident list with timestamps |

Behavior on failure:

  • UptimeHero: Fills stat numbers from cache, keeps beacon red ("Unable to connect"), shows "cached Xm ago" timestamp
  • ServiceDashboard: Renders full cached dashboard with amber banner: "Showing cached data from X min ago. Live data unavailable."
  • TimelineView: Shows cached incidents; if cache is empty, shows "Unable to load incidents" instead of misleading "All Clear"
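
The persistence pattern might look like the following; the helper names and wrapper shape are assumptions, while the cache keys come from the table above:

```typescript
// Minimal localStorage-compatible interface so the helpers stay testable.
interface StringStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

type Snapshot<T> = { data: T; savedAt: number };

function saveSnapshot<T>(store: StringStore, key: string, data: T, now = Date.now()): void {
  store.setItem(key, JSON.stringify({ data, savedAt: now }));
}

function loadSnapshot<T>(store: StringStore, key: string): Snapshot<T> | null {
  const raw = store.getItem(key);
  return raw ? (JSON.parse(raw) as Snapshot<T>) : null;
}

// Builds the "cached Xm ago" label shown when a live fetch fails.
function cachedAgoLabel(savedAt: number, now = Date.now()): string {
  const minutes = Math.max(0, Math.round((now - savedAt) / 60_000));
  return `cached ${minutes}m ago`;
}
```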

Layer 3: Client-Side Circuit Breaker

All three polling components (UptimeHero, ServiceDashboard, TimelineView) implement an inline circuit breaker:

  • After 3 consecutive failures, the circuit opens for 60 seconds
  • During open state, fetch functions return immediately (null/cached) without hitting the network
  • On next successful fetch, the circuit resets

This prevents unnecessary polling during extended outages and reduces error noise in the console.
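
The breaker can be sketched as follows; the thresholds come from the list above, while the class shape is an assumption:

```typescript
class CircuitBreaker {
  private failures = 0;
  private openUntil = 0;

  constructor(private threshold = 3, private cooldownMs = 60_000) {}

  // While open, callers skip the network and return null/cached data.
  isOpen(now = Date.now()): boolean {
    return now < this.openUntil;
  }

  recordFailure(now = Date.now()): void {
    this.failures += 1;
    if (this.failures >= this.threshold) this.openUntil = now + this.cooldownMs;
  }

  // A successful fetch resets the circuit.
  recordSuccess(): void {
    this.failures = 0;
    this.openUntil = 0;
  }
}
```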

Monitor Deduplication

ServiceDashboard deduplicates monitors by display name. Multiple Uptime Kuma monitors that map to the same nameOverrides value (e.g., 4 monitors all becoming "Workstation: Capella") are merged — the one with status: 'up' or highest uptime wins.
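
A sketch of that merge rule (the monitor shape and helper name are illustrative):

```typescript
type DedupMonitor = { displayName: string; status: 'up' | 'down'; uptime: number };

// When several monitors sanitize to the same display name, keep the
// healthiest: an 'up' monitor beats a 'down' one; ties go to higher uptime.
function dedupeByName(monitors: DedupMonitor[]): DedupMonitor[] {
  const best = new Map<string, DedupMonitor>();
  for (const m of monitors) {
    const cur = best.get(m.displayName);
    const better =
      !cur ||
      (m.status === 'up' && cur.status !== 'up') ||
      (m.status === cur.status && m.uptime > cur.uptime);
    if (better) best.set(m.displayName, m);
  }
  return [...best.values()];
}
```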

View Transitions

All components use astro:page-load (NOT DOMContentLoaded) for initialization, which is required for the Astro Client Router / View Transitions. Each component also registers cleanup via astro:before-swap to clear intervals when navigating away.

Tags: status, uptime, monitoring, uptime-kuma, infrastructure