
VM and Advanced Labs

QEMU VM-based and AI-focused playground labs -- Argo OS graphical desktop via VNC, Ollama model operations, and RAG pipeline exercises

February 23, 2026

Three playground labs provision heavier resources than the standard container labs. The Argo OS lab uses a full QEMU virtual machine with graphical VNC access. The Ollama and RAG labs provision LXC containers but focus on AI model operations and retrieval-augmented generation pipelines. All three use the same lab engine at 10.0.0.210:8094 on the isolated vmbr99 network (10.99.0.1/24).

Argo OS Experience (/playground/argo-os)

Route: src/pages/playground/argo-os.astro
Template ID: argo-os-experience

This is the flagship lab -- a full Gentoo-based QEMU virtual machine with a KDE Plasma desktop accessible through noVNC in the browser and a terminal console via xterm.js. Users get both graphical and command-line access to a real Argo OS installation.

Components

The page imports five lab components:

  • LabLauncher -- handles provisioning with QEMU-specific text (shows "Cloning Argo OS VM (~25s)" instead of the standard LXC messages, extends timeout to 90 seconds)
  • VNCEmbed -- noVNC-based graphical desktop viewer
  • TerminalEmbed -- xterm.js terminal for CLI access
  • SessionBar -- session timer and resource monitoring
  • ChallengeTracker + DocumentationPanel -- guided exercises with reference material

VNCEmbed Component

src/components/labs/VNCEmbed.astro provides graphical access to the QEMU VM's display. It uses noVNC (self-hosted in public/novnc/ as native ES modules because noVNC's top-level await breaks Vite bundling).

Connection flow: the VNCConnection class builds the WebSocket URL via labVNCWebSocketUrl(sessionId) (production: wss://labs.argobox.com/ws/vnc/{sessionId}) and opens a raw WebSocket. It registers a close listener with addEventListener -- before noVNC overwrites onclose -- so that server-side close codes (4xxx range) are still captured, then receives the Proxmox VNC ticket as the first text message. Finally, noVNC's RFB class is instantiated with the open WebSocket and the ticket as credentials, configured with scaleViewport: true, resizeSession: true, quality 6, and compression 2.
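Two helpers implied by this flow can be sketched as below. The names mirror the prose (labVNCWebSocketUrl, the 4xxx close-code capture); the actual implementation in VNCEmbed.astro may differ.

```typescript
// Build the lab engine's VNC WebSocket URL for a session.
function labVNCWebSocketUrl(
  sessionId: string,
  base: string = "wss://labs.argobox.com",
): string {
  return `${base}/ws/vnc/${encodeURIComponent(sessionId)}`;
}

// WebSocket close codes 4000-4999 are application-defined (RFC 6455),
// the range the lab engine uses for server-side errors.
function isServerCloseCode(code: number): boolean {
  return code >= 4000 && code <= 4999;
}

// Wiring sketch (browser side), abbreviated:
//   const ws = new WebSocket(labVNCWebSocketUrl(sessionId));
//   ws.addEventListener("close", (e) => {
//     if (isServerCloseCode(e.code)) { /* surface the engine's error */ }
//   });
//   // The first text message carries the Proxmox VNC ticket; only then:
//   // new RFB(container, ws, { credentials: { password: ticket } });
```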

The viewer supports fullscreen (button, double-click, or F11) with auto-hiding chrome. Auto-reconnect uses linear scaling (3s base delay * min(attempts, 5), capping at 15s).
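The backoff schedule above reduces to a one-liner; a sketch (the component's actual code may differ):

```typescript
// Linear reconnect backoff: 3 s base delay scaled by the attempt number,
// capped at 5 attempts' worth, i.e. 15 s.
function reconnectDelayMs(attempt: number): number {
  return 3000 * Math.min(attempt, 5);
}
```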

VNC Ticket Flow (Proxmox Integration)

The lab engine acts as a VNC proxy between the browser and Proxmox's built-in VNC server:

Browser → WebSocket to lab engine → lab engine requests VNC ticket from Proxmox API
  → Proxmox returns WebSocket URL + ticket
  → lab engine sends ticket as first message to browser
  → browser passes ticket to noVNC as RFB credentials
  → noVNC completes VNC authentication with Proxmox
  → graphical desktop stream begins

This is the same mechanism Proxmox's own web console uses. The lab engine proxies it so users do not need direct access to the Proxmox API.
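On the engine side, the ticket comes from Proxmox's documented vncproxy endpoint. A sketch of the request path (the surrounding request/auth handling is summarized in comments, not the engine's actual code):

```typescript
// Path of the Proxmox API call that issues a one-time VNC ticket for a VM.
function vncProxyPath(node: string, vmid: number): string {
  return `/api2/json/nodes/${node}/qemu/${vmid}/vncproxy`;
}

// The engine POSTs to this path (authenticated with an API token),
// receives { ticket, port } in the response's data field, connects to
// Proxmox's vncwebsocket endpoint, and relays the ticket to the browser
// as the first text message.
```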

Challenges

Challenges are organized by difficulty tier and cover Gentoo-specific tasks:

  • Beginner: Search for packages with emerge --search, view installed packages with qlist -I, inspect /etc/portage/make.conf, check package versions with equery
  • Intermediate: USE flags, world set management, portage tree structure
  • Advanced: Custom ebuild creation, overlay management, build optimization
  • Expert: Kernel configuration, cross-compilation, binary package hosting

Ollama Lab (/playground/ollama)

Route: src/pages/playground/ollama.astro
Template ID: ollama-lab

The Ollama lab provisions an LXC container for AI model operations. The page includes a "command deck" with pre-built commands and simulated output:

  • systemctl status ollama -- Check service health
  • ollama list -- View installed models
  • curl -s http://127.0.0.1:11434/api/tags | jq -- API smoke test
  • systemctl show -p Environment ollama -- Verify model path override
  • Remote connectivity test -- Check cross-host access
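The API smoke test hits Ollama's /api/tags endpoint, which returns the installed models as JSON. A minimal sketch of pulling out the model names (the response type is trimmed to the one field the check needs):

```typescript
// Minimal slice of Ollama's /api/tags response shape.
interface TagsResponse {
  models: { name: string }[];
}

// Extract model names from a raw /api/tags response body.
function modelNames(body: string): string[] {
  const parsed = JSON.parse(body) as TagsResponse;
  return parsed.models.map((m) => m.name);
}
```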

Challenges cover service setup, model storage configuration (systemd overrides for alternate model paths), API diagnostics, and OpenWebUI integration. Difficulty: Intermediate (15-35 min).

RAG Workshop (/playground/rag)

Route: src/pages/playground/rag.astro
Template ID: iac-playground (LabLauncher) / rag-workshop (ChallengeTracker)

The RAG workshop provisions a container for building a local-first RAG pipeline from markdown vaults. Three-step workflow:

  1. Index -- argo-rag index --vault ~/Vaults/knowledge-base -- chunk markdown files, generate embeddings, build a vector index
  2. Search -- argo-rag search "query" -- run semantic lookup against the index
  3. Serve -- argo-rag web -- open a local UI to compare query phrasing and result quality
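As a toy illustration of the Index step's chunking (not argo-rag's actual strategy), a fixed-size, paragraph-aware chunker with overlap might look like:

```typescript
// Split markdown into chunks of at most ~maxChars characters, breaking on
// paragraph boundaries and carrying an overlap tail between chunks so
// context isn't lost at chunk edges. Parameters are illustrative.
function chunkMarkdown(text: string, maxChars = 500, overlap = 100): string[] {
  const paragraphs = text.split(/\n{2,}/).map((p) => p.trim()).filter(Boolean);
  const chunks: string[] = [];
  let current = "";
  for (const p of paragraphs) {
    if (current && current.length + p.length + 2 > maxChars) {
      chunks.push(current);
      // Carry the tail of the previous chunk forward as overlap.
      current = current.slice(-overlap);
    }
    current = current ? `${current}\n\n${p}` : p;
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Each chunk would then be embedded and stored in the vector index; the overlap keeps a sentence that straddles a boundary retrievable from either side.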

The lab uses LabLauncher, TerminalEmbed, SessionBar, ChallengeTracker, and DocumentationPanel. Challenges ("missions") focus on:

  • Intermediate: Index a vault, run basic semantic queries, compare results
  • Advanced: Tune chunking strategies, experiment with embedding models, analyze retrieval scoring
  • Expert: Build a full RAG pipeline end-to-end, optimize recall vs. precision, integrate with local LLMs

Difficulty: Advanced. Estimated time: 20-45 minutes.

Resource Differences

VM labs consume significantly more resources than container labs:

Lab             Backend        Boot Time  Memory  Timeout
Container labs  LXC            ~5-10s     512MB   30s poll
Argo OS         QEMU VM        ~25-45s    2-4GB   90s poll
Ollama          LXC + service  ~10-15s    1GB+    30s poll
RAG             LXC + tools    ~10-15s    1GB+    30s poll

The LabLauncher adjusts its UI and timeouts per template. For argo-os-experience, it shows an ETA and uses QEMU-specific step descriptions.
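A hypothetical sketch of that per-template adjustment (the argo-os-experience values come from the text above; the container-path label is a placeholder, not LabLauncher's real string):

```typescript
interface LaunchProfile {
  pollTimeoutS: number;
  provisioningLabel: string;
}

// Map a lab template ID to its launcher settings.
function launchProfile(templateId: string): LaunchProfile {
  if (templateId === "argo-os-experience") {
    // QEMU VMs boot slower, so the poll timeout is extended to 90 s.
    return { pollTimeoutS: 90, provisioningLabel: "Cloning Argo OS VM (~25s)" };
  }
  // Default container path (placeholder label).
  return { pollTimeoutS: 30, provisioningLabel: "Provisioning LXC container" };
}
```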

VM Template Configuration (VM 900)

The Argo OS lab template (VMID 900 on Titan) runs Gentoo with KDE Plasma 6.5.4, SDDM display manager, and Xorg. Key configuration decisions:

SDDM Auto-Login

SDDM is configured for auto-login (/etc/sddm.conf) so users land directly on the KDE Plasma desktop without seeing a login screen. Session type: plasmax11.
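A minimal /etc/sddm.conf for this setup might look like the following sketch (the autologin user is assumed to be argo, matching the home directory the other config files live under):

```ini
[Autologin]
User=argo
Session=plasmax11
```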

DPMS and Screen Blanking (Disabled)

Warm pool VMs sit idle for extended periods before a user claims them. Without explicit DPMS disabling, KDE's power management would blank the display after 30 minutes, causing users to see a black screen when connecting via VNC.

Three layers prevent screen blanking:

  1. Xorg server config (/etc/X11/xorg.conf.d/10-no-dpms.conf) -- Sets BlankTime, StandbyTime, SuspendTime, and OffTime to 0 and disables DPMS at the monitor level.

  2. KDE PowerDevil (~argo/.config/powerdevilrc) -- All display dim/off timeouts set to 0 across all power profiles. Auto-suspend disabled.

  3. XDG autostart (/etc/xdg/autostart/disable-dpms.desktop) -- Runs xset -dpms; xset s off; xset s noblank at session login as a runtime safety net.
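Layer 1's Xorg snippet typically takes a shape like the following sketch, consistent with the options named above (one common form; the template may instead set Option "DPMS" "false" in a Monitor section):

```ini
Section "ServerFlags"
    Option "BlankTime"   "0"
    Option "StandbyTime" "0"
    Option "SuspendTime" "0"
    Option "OffTime"     "0"
EndSection

Section "Extensions"
    Option "DPMS" "Disable"
EndSection
```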

The screen locker is also disabled (~argo/.config/kscreenlockerrc: Autolock=false, Timeout=0).
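The corresponding kscreenlockerrc fragment, with the keys named above under KDE's [Daemon] section (section name assumed from KDE's config layout):

```ini
[Daemon]
Autolock=false
Timeout=0
```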

QEMU Guest Agent

The VM config has agent: enabled=1 but the qemu-guest-agent package is not yet installed in the template. This means Proxmox cannot execute commands inside the VM or perform graceful shutdown via the agent. To install it: boot VM 900, run emerge app-emulation/qemu-guest-agent && rc-update add qemu-guest-agent default, then convert back to template.
