Distributed Storage Across Two Sites
“Data Gravity” is a real problem in homelabs. But what happens when your data is 40 miles away?
My storage architecture spans two physical locations: my house and dad’s. Different hardware, different purposes, connected by Tailscale’s mesh VPN.
Site 1: Dad’s House (The Media Fortress)
The Unraid Server
Hardware: Custom build (i5-12400, 32GB RAM)
Storage: Unraid array (3x 18TB Exos) + 1TB NVMe cache
Role: Media storage and streaming
Dad’s Unraid server holds the family media library. Plex runs here, serving content to family members across both networks. The arr stack (Sonarr, Radarr) keeps everything organized.
Why at dad’s house? Better upload bandwidth for remote streaming, and he wanted a project. The Unraid UI is approachable enough that he can manage basic tasks without calling me.
The trade-off: I’m not physically present when things go sideways. Remote debugging via Tailscale is my only option. Sometimes the folder structure… evolves in unexpected ways between my visits.
The Synology (Cassiel-Silo)
Hardware: Synology DS920+
Storage: 4x drives in SHR
Role: Critical backups, photo archive
The Synology at dad’s handles family photos via Synology Photos and provides backup storage. DSM’s interface means family members can actually use it without my help.
Site 2: My House (The Build Lab)
My local infrastructure focuses on development:
- Desktop workstation: Primary development machine
- Build swarm: 66 cores for Gentoo compilation
- Local storage: NVMe for active projects
I had a local Synology (Rigel-Silo) providing NFS exports to the build swarm, but it died. The drives are sitting on my desk waiting for a replacement enclosure. For now, binary packages are served from local NVMe.
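For the curious, the binpkg side is just standard Portage configuration; a rough sketch (the paths are illustrative, not my exact make.conf):

# /etc/portage/make.conf (sketch)
FEATURES="buildpkg"            # build a binary package for every emerge
PKGDIR="/mnt/nvme/binpkgs"     # keep the binpkg cache on local NVMe
# Other machines (or a rebuilt system) install from the cache instead of recompiling:
# emerge --usepkg --ask <package>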
Connecting the Sites
Tailscale makes the 40-mile gap invisible:
# From my desktop (10.42.0.100)
ping 192.168.20.50 # Dad's Unraid
64 bytes from 192.168.20.50: icmp_seq=0 ttl=64 time=38.2 ms
Thirty-eight milliseconds. Not LAN speed, but good enough for management and light file access.
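The plumbing behind that ping is a Tailscale subnet router at dad's house. Roughly, the setup looks like this; the advertised route still has to be approved in the Tailscale admin console, and IP forwarding has to be enabled on the router machine first:

# On a machine at dad's house: advertise his LAN to the tailnet
sudo tailscale up --advertise-routes=192.168.20.0/24
# On my desktop (Linux): accept advertised subnet routes
sudo tailscale up --accept-routes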
Sync Strategy
Rclone for Cross-Site Sync
Critical data syncs between sites:
#!/bin/bash
# Sync important docs from local to dad's Synology
rclone sync ~/Documents synology-remote:/backup/docs
# Sync everything to cloud (disaster recovery)
rclone sync synology-remote:/backup cloud_crypt:/backup --fast-list --transfers 16
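That script just runs from cron on my desktop; something like this entry is the general shape (path and schedule illustrative):

# Nightly at 03:00 - push docs to dad's Synology, then mirror to the cloud
0 3 * * * /home/me/bin/sync-backups.sh >> /var/log/rclone-sync.log 2>&1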
Encrypted Cloud Backup
Google Drive as off-site backup. But I trust no one with raw data.
[gdrive]
type = drive
scope = drive
[cloud_crypt]
type = crypt
remote = gdrive:backups/homelab
password = ********
Every filename and byte is encrypted before leaving my network.
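If you want to convince yourself of that, listing the underlying remote should show only scrambled names, and rclone can verify the encrypted copy against its source. Roughly:

# Underlying remote: nothing human-readable comes back
rclone ls gdrive:backups/homelab | head
# Verify the encrypted copy against the Synology source
rclone cryptcheck synology-remote:/backup cloud_crypt:/backup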
The Architecture
┌──────────────────────────────┐ ┌──────────────────────────────┐
│ My House (10.42.0.x) │ │ Dad's House (192.168.20.x) │
│ │ │ │
│ ┌────────────────────────┐ │ │ ┌────────────────────────┐ │
│ │ Desktop + Build Swarm │ │ │ │ Unraid (Media) │ │
│ │ (Active Development) │ │ │ │ Synology (Backups) │ │
│ └────────────────────────┘ │ │ │ Proxmox (VMs) │ │
│ │ │ └────────────────────────┘ │
└──────────────┬───────────────┘ └──────────────┬───────────────┘
│ │
└────────────┬───────────────────────┘
│
┌───────▼───────┐
│ Tailscale │
│ Mesh VPN │
└───────┬───────┘
│
┌───────▼───────┐
│ Google Drive │
│ (Encrypted) │
└───────────────┘
Why This Works
Geographic redundancy. If my house burns down, the media library survives. If dad’s house has issues, my development environment is unaffected.
Appropriate hardware at each site. Dad gets the user-friendly Synology and media-focused Unraid. I get the build swarm and development tools.
Family involvement. Dad can manage basic Unraid tasks. He occasionally… reorganizes things in creative ways, but that’s part of the fun.
Tailscale makes it seamless. I can SSH to any machine at either site. Remote debugging works (mostly). The mesh VPN turned two isolated networks into one distributed homelab.
The Challenges
Latency for large transfers. 38ms is fine for SSH and small files. Moving terabytes requires patience or physical travel with hard drives.
Remote debugging. When something breaks at dad’s, I can’t just walk over and check the blinking lights. Screenshots over Tailscale and phone calls become my eyes.
Configuration drift. Changes happen at dad’s site that I don’t always know about. Documenting the “expected state” of remote systems is critical.
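My low-tech answer is to keep a baseline of what should be running and diff it against reality over Tailscale. A sketch, with the baseline file being illustrative:

#!/bin/bash
# Compare containers running on dad's Unraid against a documented baseline
ssh root@192.168.20.50 "docker ps --format '{{.Names}}'" | sort > /tmp/actual.txt
diff ~/homelab-docs/unraid-expected-containers.txt /tmp/actual.txt && echo "No drift"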
Lessons Learned
Don’t assume co-location. Distributed storage is harder but more resilient. Plan for it from the start.
Document remote systems obsessively. When you can’t physically access hardware, documentation is your lifeline.
Choose appropriate hardware for each site. Dad doesn’t need a 66-core build swarm. I don’t need 54TB of media storage locally.
Tailscale changes everything. Before subnet routing, managing two sites was a nightmare of port forwarding and dynamic DNS. Now it’s just… networking.
Two sites, 40 miles apart, functioning as one distributed homelab. Not because it’s easy, but because redundancy matters.