
Tutorial Playbooks

Real syntax, real deployment flow: each playbook includes complete files plus explicit deploy, verify, and rollback steps.

4 hands-on playbooks

Source Method

The patterns here follow the primary upstream documentation and are adapted for ArgoBox-style infrastructure. Use them as starting templates, then tune hostnames, ports, and auth to your environment.

Compose Stack with Traefik + Postgres + Healthchecks

A restart-safe stack with TLS routing, service health gates, and secret-based DB auth.

Intermediate · 30-45 min · docker
Prerequisites
  • Docker Engine + Compose v2
  • A DNS record pointing to your host
  • Ports 80/443 reachable
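A quick way to sanity-check the DNS prerequisite before deploying (the hostname comes from the router rule below; substitute your own):

dig +short app.argobox.com   # should resolve to this host's public IP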
/opt/argobox/stack/docker-compose.yml
services:
  traefik:
    image: traefik:v3.1
    command:
      - --api.dashboard=true
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.letsencrypt.acme.email=admin@argobox.com
      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.letsencrypt.acme.httpchallenge=true
      - --certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt
    restart: unless-stopped

  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres_password
    secrets:
      - postgres_password
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d app"]
      interval: 10s
      timeout: 5s
      retries: 6
    restart: unless-stopped

  app:
    image: ghcr.io/traefik/whoami:v1.10
    depends_on:
      postgres:
        condition: service_healthy
    labels:
      - traefik.enable=true
      - traefik.http.routers.app.rule=Host(`app.argobox.com`)
      - traefik.http.routers.app.entrypoints=websecure
      - traefik.http.routers.app.tls.certresolver=letsencrypt
      - traefik.http.services.app.loadbalancer.server.port=80
    restart: unless-stopped

secrets:
  postgres_password:
    file: ./secrets/postgres_password.txt

volumes:
  pgdata:
/opt/argobox/stack/secrets/postgres_password.txt
replace-with-a-long-random-password
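If you'd rather generate the secret than invent one, something like this works (same path as above):

mkdir -p /opt/argobox/stack/secrets
openssl rand -base64 32 > /opt/argobox/stack/secrets/postgres_password.txt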

Deploy

mkdir -p /opt/argobox/stack/{letsencrypt,secrets}
chmod 700 /opt/argobox/stack/letsencrypt
chmod 600 /opt/argobox/stack/secrets/postgres_password.txt   # the secret file must already exist (see above)
cd /opt/argobox/stack
docker compose pull
docker compose up -d --remove-orphans

Verify

docker compose ps
docker compose logs --no-log-prefix postgres | tail -n 30
curl -I https://app.argobox.com
docker exec $(docker compose ps -q postgres) pg_isready -U app -d app
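As an extra check that the database is queryable and not merely accepting connections, run a trivial query over the container's local socket (no password is needed there with the official image defaults):

docker exec $(docker compose ps -q postgres) psql -U app -d app -c "SELECT 1;"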

Rollback

cd /opt/argobox/stack
# restore the previous known-good docker-compose.yml and image tags first
docker compose down
docker compose up -d --remove-orphans
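If the stack directory is tracked in git (an assumption, not part of the files above), restoring the previous known-good definition before bringing the stack back up is one command:

cd /opt/argobox/stack
git checkout HEAD~1 -- docker-compose.yml
docker compose up -d --remove-orphans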

Kubernetes Deployment with Startup/Readiness/Liveness + Ingress

A rollout-safe deployment that survives slow startup and only receives traffic when healthy.

Advanced · 35-50 min · kubernetes
Prerequisites
  • Working Kubernetes cluster
  • Ingress controller installed
  • kubectl access to target namespace
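To confirm the ingress-controller prerequisite, check that an IngressClass named nginx exists; the namespace below assumes a standard ingress-nginx install:

kubectl get ingressclass
kubectl get pods -n ingress-nginx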
/opt/argobox/k8s/app-stack.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: app-prod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-api
  namespace: app-prod
spec:
  replicas: 3
  revisionHistoryLimit: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: app-api
  template:
    metadata:
      labels:
        app: app-api
    spec:
      containers:
        - name: api
          image: ghcr.io/acme/app-api:1.8.2
          ports:
            - containerPort: 8080
          env:
            - name: APP_ENV
              value: production
          resources:
            requests:
              cpu: 150m
              memory: 256Mi
            limits:
              cpu: 1
              memory: 1Gi
          startupProbe:
            httpGet:
              path: /health/startup
              port: 8080
            periodSeconds: 5
            failureThreshold: 30
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
            periodSeconds: 10
            timeoutSeconds: 2
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /health/live
              port: 8080
            periodSeconds: 15
            timeoutSeconds: 2
            failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: app-api
  namespace: app-prod
spec:
  selector:
    app: app-api
  ports:
    - name: http
      port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-api
  namespace: app-prod
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts: ["api.argobox.com"]
      secretName: app-api-tls
  rules:
    - host: api.argobox.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-api
                port:
                  number: 80

Deploy

kubectl apply -f /opt/argobox/k8s/app-stack.yaml
kubectl rollout status deploy/app-api -n app-prod --timeout=180s

Verify

kubectl get pods -n app-prod -o wide
kubectl get ingress -n app-prod
kubectl describe deploy app-api -n app-prod | rg "Strategy|Replicas|Liveness|Readiness|Startup"
curl -fsS https://api.argobox.com/health/ready
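To exercise the zero-downtime settings before you depend on them, push a throwaway image tag and watch the surge behavior (the 1.8.3 tag is hypothetical):

kubectl set image deploy/app-api api=ghcr.io/acme/app-api:1.8.3 -n app-prod
kubectl rollout status deploy/app-api -n app-prod --timeout=180s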

Rollback

kubectl rollout undo deploy/app-api -n app-prod
kubectl rollout status deploy/app-api -n app-prod
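Plain undo returns to the previous revision; to target an older one, inspect the history first (revisionHistoryLimit above keeps the last 5):

kubectl rollout history deploy/app-api -n app-prod
kubectl rollout undo deploy/app-api -n app-prod --to-revision=<n>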

SSH Hardening + Fail2Ban for Admin Access

Password auth disabled, limited attack surface, and automatic ban rules for repeated auth failures.

Intermediate · 25-35 min · security
Prerequisites
  • Console/physical recovery path
  • At least one tested SSH public key
  • Root/sudo access
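Before touching the config, prove the key prerequisite from a second terminal; if this fails, stop here (user@host is a placeholder):

ssh -o PasswordAuthentication=no -o PreferredAuthentications=publickey user@host true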
/etc/ssh/sshd_config.d/99-hardening.conf
Port 22
AddressFamily inet
ListenAddress 0.0.0.0

PermitRootLogin no
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no
ChallengeResponseAuthentication no
UsePAM yes

AllowAgentForwarding no
AllowTcpForwarding no
X11Forwarding no
PermitTunnel no
AllowStreamLocalForwarding no

MaxAuthTries 3
MaxSessions 5
LoginGraceTime 30
ClientAliveInterval 300
ClientAliveCountMax 2

LogLevel VERBOSE
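If your admins share a dedicated group, an allow-list line narrows access further; the ssh-admins group name here is an example, so create or substitute your own before appending:

echo "AllowGroups ssh-admins" | sudo tee -a /etc/ssh/sshd_config.d/99-hardening.conf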
/etc/fail2ban/jail.d/sshd.local
[sshd]
enabled = true
backend = systemd
port = ssh
maxretry = 4
findtime = 10m
bantime = 1h
bantime.increment = true
bantime.rndtime = 5m

Deploy

sudo sshd -t   # validate the config before reloading
# keep an existing session open until key-based login is re-verified
sudo systemctl reload sshd || sudo rc-service sshd reload
sudo systemctl restart fail2ban || sudo rc-service fail2ban restart

Verify

ssh -o PreferredAuthentications=password user@host  # should fail
ssh -T user@host
sudo fail2ban-client status sshd
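If a legitimate address gets banned while you test, fail2ban-client can lift the ban (the IP is an example):

sudo fail2ban-client set sshd unbanip 203.0.113.10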

Rollback

sudo mv /etc/ssh/sshd_config.d/99-hardening.conf /etc/ssh/sshd_config.d/99-hardening.conf.bak
sudo systemctl reload sshd || sudo rc-service sshd reload

Tailscale Subnet Router + ACL Policy

Expose LAN subnets through a controlled router node with explicit ACL ownership.

Intermediate · 20-30 min · networking
Prerequisites
  • Tailscale installed on router node
  • Admin access to Tailscale policy file
  • Known LAN CIDR
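To confirm the LAN CIDR on the router node before editing the script:

ip -4 addr show
ip -4 route show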
/etc/tailscale/bootstrap-subnet-router.sh
#!/usr/bin/env bash
set -euo pipefail

LAN_CIDR="10.42.0.0/24"

tailscale up \
  --ssh \
  --advertise-routes="${LAN_CIDR}" \
  --accept-routes=false \
  --accept-dns=false

echo "Approve the advertised route in the admin console before testing."
tailscale-acl.json
{
  "tagOwners": {
    "tag:subnet-router": ["autogroup:admin"]
  },
  "acls": [
    {
      "action": "accept",
      "src": ["group:admins"],
      "dst": ["10.42.0.0/24:*"]
    }
  ],
  "groups": {
    "group:admins": ["you@argobox.com"]
  }
}
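The tag declared in tagOwners only matters if the router node actually carries it. One way to attach it is to extend the tailscale up call with --advertise-tags (same flags as the script, plus the tag; depending on your tailnet settings this may prompt for re-authentication):

sudo tailscale up \
  --ssh \
  --advertise-routes="10.42.0.0/24" \
  --accept-routes=false \
  --accept-dns=false \
  --advertise-tags=tag:subnet-router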

Deploy

sudo chmod +x /etc/tailscale/bootstrap-subnet-router.sh
sudo /etc/tailscale/bootstrap-subnet-router.sh
tailscale status

Verify

tailscale status --json | jq ".Self.PrimaryRoutes"   # populated once the route is approved
tailscale ping <remote-node>
ip route show table 52 | rg "10.42.0.0/24"   # on a client node that accepts routes

Rollback

tailscale up --reset
tailscale status