Zero Trust at Home: Cloudflare Tunnels, Split-Brain DNS, and Why Port 443 Is Dead to Me
I used to open ports on my firewall like it was nothing. Port 443, port 8080, port 32400 for Plex. A DNAT rule here, a firewall exception there. “It’s fine,” I told myself. “Nobody’s targeting a residential IP.”
Then I looked at my OPNsense firewall logs one morning and saw 47,000 connection attempts from IP ranges I’d never heard of. Port scanners from three continents. Brute force attempts against SSH that had been running for days. Some bot had found my Plex port and was hammering it with garbage requests.
47,000. In one night.
That was the morning I decided port 443 was dead to me.
The Old Way Was Terrible and I Did It Anyway
For years, the homelab remote access playbook looked like this:
1. Open a port on your router.
2. Point a dynamic DNS record at your residential IP (because of course it changes every few days).
3. Write a firewall rule. Hope it holds.
4. Set up an nginx reverse proxy so you can route multiple services through port 443.
5. Manage SSL certificates with Let’s Encrypt and pray the renewal cron job doesn’t silently fail.
I did all of this. For years.
The problems stack up fast. Dynamic DNS has propagation delays, so sometimes you’re just staring at a DNS_PROBE_FINISHED_NXDOMAIN error for twenty minutes wondering if your house burned down or if Cloudflare just hasn’t caught up yet. Your real IP address is exposed to anyone who does a DNS lookup. DDoS attacks hit your residential connection directly — and good luck explaining to your ISP why your 500 Mbps line is saturated by UDP floods targeting your Grafana dashboard.
And the SSL certificates. I had a Let’s Encrypt cert that expired on a Saturday night because the renewal hook needed a config file that got overwritten during an nginx update three weeks earlier. Didn’t notice until Monday morning when a browser showed me the red padlock of shame while I was trying to check on a build job from a coffee shop.
Port forwarding is a contract with chaos. You’re telling the internet “here’s an open door, please only send nice traffic through it.” The internet does not honor that contract.
How Cloudflare Tunnels Actually Work
The mental model shift is small but everything changes because of it.
Instead of opening port 443 on your router and letting the internet reach in, you run a daemon called cloudflared on your server that reaches out. It establishes a persistent outbound connection to Cloudflare’s edge network. Traffic flows backward from what you’d expect:
┌──────────┐     ┌─────────────────┐     ┌─────────────┐     ┌──────────────┐
│ Internet │────>│ Cloudflare Edge │<────│ cloudflared │────>│ Your Service │
└──────────┘     └─────────────────┘     └─────────────┘     └──────────────┘
                                               │
                                (outbound connection only)
Someone requests code.argobox.com. Cloudflare receives that request at their nearest edge node. Cloudflare routes it through the tunnel that cloudflared established. Your server responds back through that same tunnel. At no point does the outside world connect directly to your network.
Your firewall stays locked. No inbound rules. No port forwarding. No DNAT. Nothing.
The first time I set this up and ran nmap against my public IP from an external VPS, every single port showed as filtered. I refreshed the scan three times because I didn’t believe it. All my services were accessible through their subdomains, but my IP looked like a brick wall. That was a good feeling.
Setting It Up (The Real Config, Not the Docs Placeholder Version)
Install cloudflared
On Gentoo, I build from source because of course I do. But for most people:
# Debian/Ubuntu
curl -L --output cloudflared.deb \
https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
dpkg -i cloudflared.deb
# Or pull the Docker image
docker pull cloudflare/cloudflared:latest
Authenticate
cloudflared tunnel login
This pops open a browser window (or gives you a URL to paste). You log into Cloudflare, pick the domain you want to use, and it drops a certificate into ~/.cloudflared/cert.pem. Guard that file — chmod 600 at minimum. It’s the key to your kingdom.
Create the Tunnel
cloudflared tunnel create homelab
This spits out a tunnel ID and a credentials JSON file. The tunnel ID is a UUID — something like a1b2c3d4-e5f6-7890-abcd-ef1234567890. You’ll reference it in your config.
The Config File
Here’s what my actual ~/.cloudflared/config.yml looks like (sanitized, obviously):
tunnel: homelab
credentials-file: /home/commander/.cloudflared/a1b2c3d4-e5f6-7890-abcd-ef1234567890.json

ingress:
  # VS Code Server - full IDE in the browser
  - hostname: code.argobox.com
    service: http://localhost:8443
    originRequest:
      noTLSVerify: true

  # Grafana monitoring dashboards
  - hostname: monitor.argobox.com
    service: http://localhost:3000

  # Overseerr - media requests for family
  - hostname: requests.argobox.com
    service: http://localhost:5055

  # Catch-all: return 404 for anything else
  - service: http_status:404
Each hostname entry maps a public subdomain to a local service. The catch-all at the bottom is important — without it, cloudflared won’t start. It needs a default route for unmatched requests.
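The matching is strictly first-match, top to bottom, which is exactly why the catch-all has to come last. A toy sketch of that evaluation order — this is an illustration of the rule semantics, not cloudflared’s actual code:

```python
# Toy model of cloudflared's first-match ingress evaluation.
# Rules mirror the config above; purely illustrative.
INGRESS = [
    {"hostname": "code.argobox.com", "service": "http://localhost:8443"},
    {"hostname": "monitor.argobox.com", "service": "http://localhost:3000"},
    {"hostname": "requests.argobox.com", "service": "http://localhost:5055"},
    {"service": "http_status:404"},  # catch-all: matches any hostname
]

def route(hostname):
    """Return the service for the first rule whose hostname matches."""
    for rule in INGRESS:
        # A rule with no hostname matches every request (the catch-all).
        if "hostname" not in rule or rule["hostname"] == hostname:
            return rule["service"]
    raise ValueError("no catch-all rule; cloudflared refuses to start")

print(route("code.argobox.com"))     # http://localhost:8443
print(route("unknown.argobox.com"))  # http_status:404
```

A request for an unmatched hostname falls through every rule until the catch-all, which is why removing it makes the whole rule list incomplete.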
Route DNS
cloudflared tunnel route dns homelab code.argobox.com
cloudflared tunnel route dns homelab monitor.argobox.com
cloudflared tunnel route dns homelab requests.argobox.com
This creates CNAME records in Cloudflare DNS pointing each subdomain to your tunnel. You can also do this manually in the Cloudflare dashboard, but the CLI is faster.
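Under the hood, each of those routes is just a proxied CNAME pointing the subdomain at your tunnel’s UUID on cfargotunnel.com. In zone-file terms (using the sanitized UUID from earlier), the record looks roughly like:

```
code.argobox.com.  300  IN  CNAME  a1b2c3d4-e5f6-7890-abcd-ef1234567890.cfargotunnel.com.
```

Because the record is proxied (orange cloud), public lookups only ever return Cloudflare edge addresses — your origin IP never appears in DNS.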
Docker Compose for the Lazy (Smart) Path
I run cloudflared in Docker on Altair-Link (10.42.0.199), my services gateway box. Here’s the compose file:
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    container_name: cloudflared-tunnel
    restart: unless-stopped
    command: tunnel run homelab
    volumes:
      - /home/commander/.cloudflared:/home/nonroot/.cloudflared:ro
    networks:
      - proxy-net

networks:
  proxy-net:
    external: true
The :ro mount is intentional. The container only needs to read the credentials, never write to them.
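An alternative worth knowing about: if you create the tunnel as remotely managed in the Zero Trust dashboard, cloudflared authenticates with a single token and you don’t mount credentials at all — the ingress rules then live in the dashboard instead of config.yml. A sketch, with TUNNEL_TOKEN supplied via an .env file or secret store, never committed:

```yaml
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    container_name: cloudflared-tunnel
    restart: unless-stopped
    # Token-based auth: config and ingress rules are managed in the dashboard
    command: tunnel --no-autoupdate run --token ${TUNNEL_TOKEN}
    networks:
      - proxy-net

networks:
  proxy-net:
    external: true
```

The tradeoff: less in version control, but nothing sensitive on disk inside the container.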
OpenRC Service (Because This Is Gentoo)
For bare-metal installs on Gentoo, systemd doesn’t exist in my world. Here’s the OpenRC init script:
#!/sbin/openrc-run
# /etc/init.d/cloudflared
name="cloudflared"
description="Cloudflare Tunnel daemon"
command="/usr/local/bin/cloudflared"
command_args="tunnel run homelab"
command_user="cloudflared"
command_background=true
pidfile="/run/${RC_SVCNAME}.pid"
output_log="/var/log/cloudflared/cloudflared.log"
error_log="/var/log/cloudflared/cloudflared.log"
depend() {
    need net
    after dns
}
rc-update add cloudflared default
rc-service cloudflared start
Starts on boot, reconnects automatically if the connection drops. I’ve had this running for months without touching it.
Zero Trust Access: The Actual Killer Feature
The tunnel by itself is great. No open ports, hidden IP, encrypted transit. But the real magic is what sits in front of the tunnel: Cloudflare Zero Trust Access.
Here’s the difference. With just a tunnel, anyone who knows your subdomain can hit your service. The traffic goes through Cloudflare, sure, but it still reaches your app. With Zero Trust Access, every single request gets intercepted and authenticated before Cloudflare even forwards it to your tunnel.
I configured this through the Cloudflare Zero Trust dashboard (it’s free for up to 50 users, which is about 49 more than I need).
Identity check. Every request requires a valid Google login. Not just any Google login — only email addresses from my specific domain. My daughter can’t accidentally stumble into my Grafana dashboards from her school Chromebook. Well, she could if she had the right email, but she’d rather play Roblox anyway.
Geographic restrictions. I blocked entire countries I’ve never been to and never plan to visit. If you’re trying to access my VS Code Server from a geography I’ve never set foot in, the request dies at Cloudflare’s edge. It never even reaches the tunnel.
Device posture checks. Optional, but available. You can require that the connecting device runs a specific Cloudflare WARP client version, has disk encryption enabled, or meets other compliance checks. I don’t use this one — it felt like overkill for a homelab — but it’s there if you’re paranoid enough.
# Example Zero Trust access policy
- name: "Homelab Admin Access"
  decision: allow
  include:
    - email_domain: argobox.com
  require:
    - login_method: google
  exclude:
    - geo: ["CN", "RU", "KP"]
Even if someone discovers your tunnel URL, they hit a Cloudflare login wall. They’d need valid credentials from your identity provider to get past it. And even then, Cloudflare logs every access attempt with full context — IP, location, device, timestamp. I get alerts on my phone for failed authentication attempts.
The first week I had this running, I watched three separate bots try to access code.argobox.com, hit the Zero Trust wall, and give up. They never even touched my server. That’s the whole point.
Services I Actually Expose
Not everything goes through a tunnel. Here’s what does and what doesn’t, and why.
VS Code Server (code.argobox.com)
Full IDE in the browser. I can write code from an iPad at a coffee shop, from my phone in a waiting room (don’t judge me), from any browser anywhere. Protected by Zero Trust — you get a Google login screen before you see anything. This is the service I use most.
- hostname: code.argobox.com
  service: http://localhost:8443
  originRequest:
    noTLSVerify: true
The noTLSVerify is there because code-server uses a self-signed cert internally. Cloudflare handles the public SSL, and the tunnel connection itself is encrypted, so this is fine.
Grafana Dashboards (monitor.argobox.com)
Server health, network metrics, build swarm status, disk usage trends. Protected by Zero Trust with the same policy as VS Code Server. I check this from my phone more often than I’d like to admit.
Overseerr / Media Requests (requests.argobox.com)
This one’s different. It’s intentionally not behind Zero Trust because my family and a few friends use it to request movies and TV shows. Instead, I have Cloudflare WAF rate limiting configured — 20 requests per minute per IP, with a challenge page if you exceed it. Good enough for a media request form. Not sensitive enough to warrant forcing my dad through a Google OAuth flow every time he wants to request a Western.
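The WAF rule itself lives in the Cloudflare dashboard, but the logic behind “20 requests per minute per IP” is the classic fixed-window counter. A minimal sketch of that logic — illustrative only, since Cloudflare’s real implementation is distributed across its edge:

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS = 20  # mirrors the "20 requests per minute per IP" WAF rule

# ip -> (timestamp the current window started, requests seen in that window)
_windows = defaultdict(lambda: (0.0, 0))

def allow(ip, now=None):
    """Fixed-window limiter: permit up to MAX_REQUESTS per WINDOW_SECONDS per IP."""
    if now is None:
        now = time.monotonic()
    start, count = _windows[ip]
    if now - start >= WINDOW_SECONDS:
        _windows[ip] = (now, 1)  # window expired: start a fresh one
        return True
    if count < MAX_REQUESTS:
        _windows[ip] = (start, count + 1)
        return True
    return False  # over the limit -- this is where the challenge page kicks in
```

Each IP gets its own counter, so one abusive client hitting the limit never affects anyone else requesting movies.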
Plex — NOT Tunneled
Plex doesn’t go through Cloudflare. The bandwidth hit isn’t worth it on the free tier, and streaming 4K remux files through a proxy adds latency that shows up as buffering. Plex uses Tailscale for direct, encrypted connections instead. Different tool for a different problem. I wrote about Tailscale in another post.
Split-Horizon DNS: The Transparency Trick
Here’s the thing that makes all of this seamless. When I’m sitting at my desk at home, I don’t want code.argobox.com going out to Cloudflare and back. That’s a round trip to the nearest Cloudflare edge and back for traffic that should stay on my LAN. Wasteful. Adds latency. Dumb.
Split-horizon DNS (also called split-brain DNS) solves this.
When I’m home, my local DNS server (AdGuard Home running on 10.42.0.199) answers with the local IP for my services, so traffic stays on the LAN:

code.argobox.com     → 10.42.0.199 (direct, local)
monitor.argobox.com  → 10.42.0.199 (direct, local)
requests.argobox.com → 10.42.0.199 (direct, local)
When I’m away, public DNS returns the Cloudflare proxy IP:
code.argobox.com → Cloudflare Edge → tunnel → localhost:8443
monitor.argobox.com → Cloudflare Edge → tunnel → localhost:3000
requests.argobox.com → Cloudflare Edge → tunnel → localhost:5055
Same URLs everywhere. Same bookmarks. Same muscle memory. I don’t even think about it anymore — I type code.argobox.com and it works whether I’m in my office or on hotel WiFi three states away. The DNS layer handles the routing transparently.
Setting this up in AdGuard Home is just a DNS rewrite rule for each subdomain pointing to the local IP. Five minutes of config. Months of not thinking about it.
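For reference, those rewrites also land in AdGuardHome.yaml, which makes them easy to back up or template. A sketch — note the parent key (`dns:` vs `filtering:`) has moved between AdGuard Home versions, so check yours or just use the web UI under Filters → DNS rewrites:

```yaml
# DNS rewrites for split-horizon resolution.
# Parent section varies by AdGuard Home version (`dns:` or `filtering:`).
rewrites:
  - domain: code.argobox.com
    answer: 10.42.0.199
  - domain: monitor.argobox.com
    answer: 10.42.0.199
  - domain: requests.argobox.com
    answer: 10.42.0.199
```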
The Security Comparison
I made this table for my own sanity when I was trying to explain to a friend why he should stop port forwarding his Jellyfin server:
| | Port Forwarding | Cloudflare Tunnel |
|---|---|---|
| Ports exposed | 443, 8080, 32400, etc. | Zero |
| DDoS protection | Your ISP connection absorbs it | Stopped at Cloudflare edge |
| Firewall config | Manual DNAT/SNAT rules | No inbound rules needed |
| SSL certificates | Let’s Encrypt + renewal cron | Automatic via Cloudflare |
| Your real IP | Visible via DNS lookup | Hidden behind Cloudflare proxy |
| Authentication | Whatever your app provides | Zero Trust identity layer |
| Logging | Your own log aggregation | Cloudflare dashboard + alerts |
He switched that weekend. Sometimes a table is more convincing than an hour of explaining.
Limitations (The Honest Part)
I’m not going to pretend this is perfect. It isn’t.
Latency. Traffic goes through Cloudflare’s edge before reaching your server. For most things — web UIs, dashboards, API calls — the added few milliseconds are invisible. For real-time stuff like gaming or video calls, you’ll notice. That’s why Plex and gaming go through Tailscale instead.
Bandwidth on the free tier. Cloudflare’s free tier is generous but not unlimited. I’ve never hit a wall with web services and dashboards, but I wouldn’t push large file transfers through a tunnel. That’s what Tailscale’s direct connections are for.
Dependency on Cloudflare. If Cloudflare goes down, your tunnels go down. It’s happened — rarely, but it’s happened. I keep Tailscale as a backup path for critical access. Belt and suspenders.
Not for everything. High-bandwidth streaming, low-latency gaming, large file sync — all of these are better served by direct WireGuard/Tailscale connections. Tunnels are for web services you want to expose to a browser with authentication in front.
For admin tools, dashboards, and remote development, the tradeoff is absolutely worth it. Zero inbound ports, identity-aware access, DDoS protection, and I haven’t thought about SSL certificate renewal in over a year.
When Things Go Wrong (The Debugging Section)
Because things do go wrong. Here’s what I’ve actually had to debug.
Tunnel shows connected but services return 502
Nine times out of ten, this means the local service isn’t running or it’s listening on a different port than what’s in your config. Check the obvious first:
# Is the service actually listening?
ss -tlnp | grep 8443

# Check the tunnel's connector status and routes
cloudflared tunnel info homelab
I spent 45 minutes on a 502 error once before realizing I’d restarted my code-server container and it came back on port 8080 instead of 8443. The config file pointed to 8443. Past me had changed the container’s port mapping and didn’t update the tunnel config. Classic.
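If you want to script that “is anything actually listening” check instead of eyeballing ss output, a minimal standard-library sketch does it — host and port are whatever your ingress rule points at:

```python
import socket

def is_listening(host, port, timeout=1.0):
    """True if a TCP connect to host:port succeeds, i.e. something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A 502 from the tunnel usually means this returns False for the
# service/port named in the matching ingress rule, e.g.:
# is_listening("localhost", 8443)
```

Loop it over every port in your config and you’ve got a five-line preflight check before blaming cloudflared.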
Run in foreground for real debugging
When something’s genuinely broken, stop the service and run cloudflared in the foreground:
rc-service cloudflared stop
cloudflared tunnel run homelab

# Or crank the verbosity all the way up:
cloudflared --loglevel debug tunnel run homelab
You’ll see every connection attempt, every routing decision, every error in real time. Way more useful than digging through log files.
DNS not resolving to tunnel
If dig code.argobox.com returns your real residential IP instead of Cloudflare edge addresses, your DNS route isn’t set up:
cloudflared tunnel route dns homelab code.argobox.com
Then verify in the Cloudflare dashboard that the CNAME record exists and is proxied (orange cloud on).
The “it works from home but not outside” classic
This one caught me once. Everything worked on my LAN because split-horizon DNS was resolving to the local IP. But external DNS was pointing to my old A record instead of the tunnel CNAME. I’d set up the tunnel routes but hadn’t deleted the legacy DNS records. Cloudflare was routing traffic to my old (now closed) port 443 instead of through the tunnel. Took me an embarrassingly long time to figure out because it worked perfectly from my desk.
Check your DNS records. Delete the old A records when you switch to tunnels. Don’t ask me how long it took me to learn this. The answer is longer than it should have been.
Port 443 hasn’t been open on my firewall in over a year. My public IP returns nothing on a port scan. Every service I expose goes through an authenticated, encrypted, DDoS-protected tunnel with identity verification in front of it.
I still check the firewall logs sometimes. Force of habit. They’re boring now. That’s exactly how I like them.