I typed curl http://rutorrent.domain.com fully expecting another failure. I’d been doing this for four days. The muscle memory of disappointment was strong.
HTTP/1.1 200 OK
Server: nginx
200 OK. At approximately 11 PM on a Saturday night, after 463 messages of back-and-forth debugging across multiple platforms, Traefik finally routed traffic to a container. My daughter walked past. “Did your computer do something?” Yeah. It finally worked.
“Okay,” she said, and went back to Roblox.
Fair enough.
September 2023: The Before Times
Here’s what my infrastructure looked like before this week destroyed me: services installed directly on hosts. Manual port management. Mental gymnastics to remember that Plex was on port 32400, Sonarr on 8989, and that one thing I forgot about on 9000-something. Access was always http://192.168.20.50:PORT — ugly, unmemorable, and exactly one misconfigured firewall rule from disaster.
It worked. Mostly. The way duct tape works. You know it’s holding, but you don’t want to bump it.
Then I discovered Traefik. Automatic service discovery from Docker labels. SSL termination. Clean URLs like https://service.domain.com instead of memorizing port numbers like a phone book from 1997. The marketing made it sound effortless.
How hard could it be?
Four days hard. 463 messages hard. “Maybe containers aren’t for me” hard.
Day 1: The Optimism Phase (114 Messages)
Morning. Read the Traefik docs. Felt confident. This is the dangerous part — the docs make it look like three YAML files and a dream.
By afternoon, I had my first docker-compose up. Container started. Dashboard appeared. I remember thinking, genuinely and without irony, “This is easy.”
By evening, nothing routed correctly. Services unreachable. Traefik labels apparently being treated as decorative suggestions. Containers were up, green lights everywhere, and yet every request hit a wall.
Night: Stack Overflow. Reddit. More messages. The answers all said things like “just add the labels” as if I hadn’t been doing exactly that for six hours.
Day 1 emotional arc: Confident to Confused to Frustrated to “I’ll figure it out tomorrow.”
Tomorrow was worse.
Day 2: Network Hell (83 Messages)
The containers were running. Traefik was running. They could not talk to each other. This is the Docker networking experience in a nutshell — everything is up, nothing is connected, and docker network ls is gaslighting you.
docker network ls
# traefik-public exists
docker network inspect traefik-public
# All containers attached
curl http://service.domain.local
# Connection refused
All the containers were on the same network. Traefik could see them. The labels were there. And yet: connection refused. Every single time.
I started trying different approaches simultaneously, comparing answers from different sources. One said bridge networking. Another said host networking. A third said overlay. I tried all three, sometimes in the same hour. None of them worked, but each one failed differently, which I suppose counts as progress.
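In hindsight, the quickest sanity check at this stage is a throwaway container on the same network, curling the backend by its container name; if that fails, no amount of Traefik labels will help. A sketch, assuming a backend container named rutorrent listening on port 80:

# connectivity test from inside traefik-public, using a disposable curl container
# ("rutorrent" and port 80 are assumptions for illustration)
docker run --rm --network traefik-public curlimages/curl -sv http://rutorrent:80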
Then the images stopped pulling:
Error response from daemon: Get https://registry-1.docker.io/v2/:
net/http: request canceled while waiting for connection
Network instability. DNS issues. Docker Hub throttling. All converging on the same evening to make sure I questioned every life choice that led me to this terminal.
Day 2 emotional arc: Determined to Exhausted to “Maybe containers aren’t for me.”
I went to bed at 1 AM thinking about bridge networks. That’s not a metaphor. I literally dreamed about Docker subnets.
Day 3: Two Hundred and Thirty-Eight Messages
238 messages. In one day. I counted afterward.
Looking back, I understand why. I wasn’t debugging one problem. I was learning six things simultaneously, and every answer revealed three new questions:
10:00 AM: "How do I expose a container port?"
10:15 AM: "Why can't containers see each other?"
10:30 AM: "What's the difference between expose and ports?"
10:45 AM: "Why does DNS work sometimes and not others?"
11:00 AM: "How do Traefik labels work?"
11:15 AM: "Why are my labels being ignored?"
...
11:00 PM: "Is any of this actually working?"
Docker networking models — bridge, host, overlay — each with their own set of rules about who can see what and when. Traefik dynamic configuration that silently ignores misconfigured labels instead of telling you something’s wrong. Container DNS resolution that works inside the network but not from the host. Inter-container communication that requires the right network AND the right ports AND the right labels AND a sacrifice to the container gods.
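The DNS half is the part that kept fooling me: container names resolve through Docker's embedded DNS, which only answers queries from containers attached to the same user-defined network, never from the host itself. Something like this illustrates it (container name assumed for illustration):

# resolves: the lookup runs inside a container attached to traefik-public
docker run --rm --network traefik-public busybox nslookup rutorrent

# fails: the host's resolver knows nothing about Docker's embedded DNS
nslookup rutorrent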
The ruTorrent migration alone ate 50 messages. It had its own web server, its own port expectations, and its own opinions about reverse proxies. Cloudflare DDNS was another 40 messages — getting the API integration working so my domain would actually point at my dynamic IP.
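The DDNS piece at least has a simple shape once it works: look up the current public IP, then rewrite the A record through Cloudflare's v4 API. This is a sketch rather than my exact script; the zone and record IDs, token variable, and TTL are placeholders:

#!/bin/sh
# Cloudflare DDNS sketch; run from cron, e.g. every 5 minutes
IP=$(curl -fsS https://api.ipify.org)

curl -fsS -X PUT \
  "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"domain.com\",\"content\":\"${IP}\",\"ttl\":120,\"proxied\":false}"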
By 6 PM I had a partial understanding. By 9 PM something almost worked. By 11 PM I was back to nothing working but I understood why it wasn’t working, which felt like real progress even though the result was identical.
Day 3 emotional arc: Information overload to Gradual understanding to “Wait, I think I see it now.”
That last part was a lie. I didn’t see it yet. But I was getting close.
Day 4: The Breakthrough (225 Messages)
Still broken. But differently broken. The errors had evolved from “connection refused” to “bad gateway,” which in Traefik land means traffic is reaching the proxy but the backend connection is failing. Progress. Ugly, frustrating progress.
Three final obstacles stood between me and a working infrastructure.
Obstacle 1: “Network already exists”
Error: network traefik-public already exists
Docker Compose kept trying to create a network that already existed because I’d created it manually earlier. Every docker-compose up threw this error and bailed.
# The fix - tell Compose the network is external
networks:
  traefik-public:
    external: true
One line. external: true. That’s it. That’s a good part of what three days of network confusion came down to. Docker Compose was trying to own a network it didn’t create, and instead of joining the existing one, it threw its hands up.
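The other half of the pattern is that the network itself gets created exactly once, by hand, outside any compose file; something like:

# created once on the Docker host; every compose file then joins it via external: true
docker network create traefik-public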
Obstacle 2: The Firewall
pfSense was blocking Docker traffic. This one was sneaky because everything looked correct from inside the Docker host. Containers could talk to each other. Traefik could see the backends. But traffic from the outside — from my browser, from curl on another machine — was dying at the firewall.
Docker creates its own network subnets. 172.17.0.0/16, 172.18.0.0/16, whatever it feels like that day. pfSense had no rules for these subnets. As far as the firewall was concerned, traffic to 172.18.0.2 was traffic to an unknown network. Dropped.
I added explicit firewall rules for Docker’s subnets. Traffic started flowing.
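A related trick that makes those rules less fragile: give the shared network a pinned subnet, so the firewall has a stable target instead of whatever Docker allocated that day. A sketch with an arbitrary subnet:

# a fixed subnet means the pfSense rule can reference 172.20.0.0/24
# instead of chasing Docker's automatic allocations
docker network create --subnet 172.20.0.0/24 traefik-public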
Obstacle 3: The Port That Was Wrong for Three Days
Traefik needs to know which port inside the container to route to. Not the published port. Not the host port. The internal container port. I’d been specifying the published port for three days.
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.app.rule=Host(`app.domain.com`)"
  - "traefik.http.services.app.loadbalancer.server.port=8080" # THIS ONE
That loadbalancer.server.port label. The internal port. The one the application actually listens on inside its container. I’d been putting the external mapped port there because that’s what made intuitive sense. It does not make intuitive sense to Traefik. Traefik talks to the container directly over the Docker network. It never sees the host port mapping. It needs the container’s internal port.
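In compose terms the distinction looks like this (image and port numbers are assumptions, purely to illustrate):

services:
  rutorrent:
    image: example/rutorrent            # placeholder image
    ports:
      - "8081:80"                       # host 8081 -> container 80; only direct access uses this
    networks:
      - traefik-public
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.rutorrent.rule=Host(`rutorrent.domain.com`)"
      # Traefik dials the container over traefik-public, so this must be 80, not 8081
      - "traefik.http.services.rutorrent.loadbalancer.server.port=80"

networks:
  traefik-public:
    external: true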
Three days of wrong port. Three days.
The Moment
Day 4. Approximately 11 PM. I’d fixed all three: external network declaration, firewall rules for Docker subnets, correct internal ports on the Traefik labels.
DNS was resolving. Cloudflare DDNS was updating. The Traefik dashboard showed green backends for the first time since this nightmare began.
I typed the curl command one more time. Four days of muscle memory expecting failure.
curl -v http://rutorrent.domain.com
HTTP/1.1 200 OK
Server: nginx
Content-Type: text/html
...
I stared at the terminal. Then I stared harder because I’d been burned before by misleading output.
200 OK. Real content. Not a Traefik error page. Not “Bad Gateway.” Not “Connection Refused.” An actual, legitimate, routed-through-a-reverse-proxy-to-a-Docker-container response.
I may have yelled. Loud enough that my daughter came to investigate, delivered her review of the situation (“Okay”), and returned to Roblox. The celebration-to-audience ratio was not ideal.
What I Actually Built
The before-and-after doesn’t look dramatic on paper. It was dramatic in practice.
Before:
Services installed directly on hosts:
- Service1 on Host1:8080
- Service2 on Host2:3000
- Service3 on Host3:9000
Access: http://host-ip:port (memorize the ports or suffer)
After:
All services in containers behind Traefik:
- service1.domain.com → Traefik → container1
- service2.domain.com → Traefik → container2
- service3.domain.com → Traefik → container3
Access: https://service.domain.com (auto-SSL via Cloudflare)
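The auto-SSL part comes from Traefik’s ACME support; a DNS challenge through Cloudflare is one common way to wire it, and a sketch of that static configuration might look like this (Traefik v2 assumed, names are placeholders):

# traefik.yml (static configuration) - a sketch, not the literal file
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"

certificatesResolvers:
  cloudflare:
    acme:
      email: admin@domain.com           # placeholder
      storage: /letsencrypt/acme.json
      dnsChallenge:
        provider: cloudflare            # reads CF_DNS_API_TOKEN from the environment

providers:
  docker:
    exposedByDefault: false             # only route containers that set traefik.enable=true
    network: traefik-public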
The architecture diagram I drew afterward looked clean. Professional, even.
                Internet
                   |
                   v
           pfSense (Firewall)
                   |
                   v
+--------------------------------------+
|             Docker Host              |
|                                      |
|  +--------------------------------+  |
|  |    Traefik (Reverse Proxy)     |  |
|  |    - Port 80 (HTTP)            |  |
|  |    - Port 443 (HTTPS)          |  |
|  |    - Auto service discovery    |  |
|  |    - SSL/TLS termination       |  |
|  +----------------+---------------+  |
|                   |                  |
|      +------------+------------+     |
|      v            v            v     |
|   +------+     +------+     +------+ |
|   | App1 |     | App2 |     | App3 | |
|   +------+     +------+     +------+ |
|                                      |
|        Network: traefik-public       |
+--------------------------------------+
                   |
                   v
           Cloudflare (DDNS)
Clean. Elegant. The kind of diagram that makes it look like you knew what you were doing. I did not know what I was doing. I learned what I was doing over 463 messages and four days and a lot of bad curl output.
The Files That Made It Work
| File | Purpose |
|---|---|
| docker-compose.yml (Traefik) | Reverse proxy configuration |
| docker-compose.yml (per service) | Individual service containers |
| /etc/docker/daemon.json | Docker DNS configuration |
| Cloudflare DDNS script | Automatic public IP updates |
| Cronjob entry | DDNS automation every 5 minutes |
Each one was the product of at least an hour of debugging. The Traefik compose file alone went through maybe 30 revisions.
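I won’t pretend to reproduce all 30 revisions, but a Traefik compose file for this kind of setup tends to look something like the sketch below; the image tag, paths, and variable names here are assumptions, not the literal file:

services:
  traefik:
    image: traefik:v2.10
    ports:
      - "80:80"
      - "443:443"
    environment:
      - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}             # token for the Cloudflare DNS challenge
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro     # label-based service discovery
      - ./traefik.yml:/etc/traefik/traefik.yml:ro        # static config (entrypoints, ACME, provider)
      - ./letsencrypt:/letsencrypt                       # persisted acme.json
    networks:
      - traefik-public

networks:
  traefik-public:
    external: true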
What This Actually Taught Me
Docker networks are weird. Bridge, host, overlay — they all behave differently, and the documentation assumes you already understand the one you’re reading about. Bridge is the default, and the default is almost never what you want for multi-service setups. Host mode works but defeats the purpose of isolation. Overlay is for Swarm, which I wasn’t running. The correct answer was a user-defined bridge network with external: true, and it took three days to arrive there.
Traefik labels are picky and silent. One typo in a label and your service is invisible. Not “error” invisible — just absent. The dashboard shows nothing. The logs show nothing. Your service might as well not exist. The traefik.enable=true label is the easiest one to forget, and without it, Traefik ignores all your other labels completely.
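For illustration, a typical per-service label set ends up looking like this (router, entrypoint, and resolver names follow the sketches above), which is exactly why a single missing line is so hard to spot:

labels:
  - "traefik.enable=true"                                      # forget this and the rest is ignored
  - "traefik.http.routers.app.rule=Host(`app.domain.com`)"
  - "traefik.http.routers.app.entrypoints=websecure"
  - "traefik.http.routers.app.tls.certresolver=cloudflare"
  - "traefik.http.services.app.loadbalancer.server.port=8080"  # the container's internal port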
Firewalls don’t know about containers. This seems obvious in retrospect. Docker creates its own subnets dynamically. Your firewall has static rules. Unless you explicitly tell the firewall about Docker’s subnets, traffic dies at the boundary. I spent a full day blaming Docker networking when the packets were being dropped by pfSense.
463 messages is a lot. But 463 messages means I didn’t give up. Sometimes persistence is the skill. Not intelligence, not experience, not having the right documentation. Just the willingness to type one more message, try one more configuration, restart one more container. Day 3 almost broke me. Day 4 rewarded the stubbornness.
Sleep helps. Four days meant four nights of sleep between debugging sessions. Each morning I came back with fresh eyes and noticed things I’d been staring past at midnight. The overnight brain processing is real and I’m never too proud to use it.
What Came Next
This foundation enabled everything that followed. Once I understood Docker networking and Traefik routing, adding new services became a five-minute job instead of a five-hour ordeal. The containerized infrastructure grew from three services to a dozen. Then more.
Sixteen months later, I’d move from Docker Compose to K3s — Kubernetes on the edge. That was its own nightmare, but at least I understood networking by then.
The build swarm came later. 66 cores of distributed Gentoo compilation. This blog, running in a container behind Traefik, came later too.
All of it traces back to those four days in September 2023. The 463-message saga that transformed my infrastructure from “stuff running on servers” to “a system I built and understand.”
The messages weren’t wasted. They were tuition. Expensive, frustrating, 11-PM-on-a-Saturday tuition. But I graduated.
And my daughter still doesn’t care.