The Placebo Effect
It’s January 2nd. New Year, New Homelab.
My Synology DS1819+ felt “slow”. Eight bays, all populated — six 8TB Seagates in a RAID 6 array (32TB usable after parity) plus two 4TB WD Reds as a separate volume for backups. That works out to 56TB raw, roughly 40TB of usable capacity across both volumes. Dual 1GbE NICs in link aggregation. 16GB RAM (upgraded from the stock 4GB). A perfectly capable NAS that had been running without complaint for years.
But it felt “slow.” I don’t know what that means, but I felt it. Pages took a beat to load. File transfers seemed sluggish. Two empty M.2 NVMe slots on the board were staring at me. Synology’s marketing promised “up to 20x faster I/O performance with SSD cache.” Every homelab YouTuber seemed to have NVMe cache in their NAS.
I had the Amazon cart open. Two 1TB Samsung 970 EVOs. $200.
Actually Measuring Things
Before clicking “Buy”, I decided to act like an engineer for five minutes instead of a consumer.
“Is it actually slow? Or am I just bored and looking for things to upgrade?”
I installed iperf3 on the NAS (via Docker, since Synology’s package center doesn’t have it) and my desktop.
# On NAS
docker run -d --name iperf3 -p 5201:5201 networkstatic/iperf3 -s
# On Desktop
iperf3 -c 10.42.0.10
Result:
[SUM] 0.00-10.00 sec 1.10 GBytes 941 Mbits/sec sender
[SUM] 0.00-10.00 sec 1.10 GBytes 940 Mbits/sec receiver
941 Mbps. On a Gigabit connection. That is 94% of the theoretical maximum. The NAS literally cannot push data any faster over this cable. The remaining 6% is protocol overhead. Physics. Not something an NVMe cache is going to fix.
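One caveat before trusting a single number: a lone TCP stream can understate what a link will carry. iperf3 can also run parallel streams and test the reverse direction; same server container, same IP as above:
# Four parallel streams instead of one
iperf3 -c 10.42.0.10 -P 4
# Reverse direction: NAS sends, desktop receives
iperf3 -c 10.42.0.10 -R
With link aggregation in the mix, any single flow still rides one physical link, so ~940 Mbit/s per stream is exactly what you'd expect here.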
But maybe the network test isn’t the whole story. Maybe the disks themselves are bottlenecking on random I/O, and the NVMe would help with that even if the network is saturated.
The fio Deep Dive
SSH into the NAS. Run some actual disk benchmarks with fio.
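One bit of setup I'm assuming before the runs below: fio is already on the box (as far as I know it isn't something DSM ships), and the scratch path used in --filename exists.
# Create the scratch directory that the --filename paths below point at
sudo mkdir -p /volume1/test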
Sequential read (the “streaming movies” workload):
fio --name=seq-read --ioengine=libaio --direct=1 \
--rw=read --bs=1M --size=1G --numjobs=1 \
--filename=/volume1/test/fio-test
Result: 412 MB/s. RAID 6 across six drives, reading sequentially. The network is capped at ~117 MB/s (gigabit). The disks are already 3.5x faster than the pipe.
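The ~117 MB/s figure is just the iperf3 result converted from bits to bytes, and the headroom falls out of that:
# 941 Mbit/s on the wire, 8 bits per byte
awk 'BEGIN { printf "%.1f MB/s\n", 941 / 8 }'
# Sequential disk throughput versus what the network can carry
awk 'BEGIN { printf "%.1fx\n", 412 / 117 }'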
Sequential write (the “backup dump” workload):
fio --name=seq-write --ioengine=libaio --direct=1 \
--rw=write --bs=1M --size=1G --numjobs=1 \
--filename=/volume1/test/fio-test
Result: 287 MB/s. Slower due to RAID 6 parity calculation, but still 2.4x faster than the network.
Now the interesting one. Random 4K reads (the “lots of small files, thumbnails, database queries” workload):
fio --name=rand-read --ioengine=libaio --direct=1 \
--rw=randread --bs=4k --size=256M --numjobs=4 \
--iodepth=32 --filename=/volume1/test/fio-test
Result: 1,847 IOPS. About 7.2 MB/s. This is where spinning rust actually hurts. Seek time is mechanical. The heads have to physically move to find each random block. An NVMe drive doing the same test would hit 300,000+ IOPS.
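A side note on the fio invocation: with numjobs=4, fio prints one result block per job unless you add --group_reporting, which collapses them into a single aggregate line — the easier number to compare against a single NVMe drive. A variant of the same test, with only that flag added:
fio --name=rand-read --ioengine=libaio --direct=1 \
--rw=randread --bs=4k --size=256M --numjobs=4 \
--iodepth=32 --group_reporting \
--filename=/volume1/test/fio-test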
Random 4K writes:
fio --name=rand-write --ioengine=libaio --direct=1 \
--rw=randwrite --bs=4k --size=256M --numjobs=4 \
--iodepth=32 --filename=/volume1/test/fio-test
Result: 923 IOPS. Even worse. RAID 6 write penalty (every write requires updating two parity blocks) combined with random seek times. Painful.
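Worth spelling out what that penalty means mechanically. A small random write to RAID 6 is a read-modify-write: read the old data block and both parity blocks, then write all three back, so one logical 4K write costs roughly six disk operations in the worst case (stripe caching and write coalescing soften this in practice). A rough upper bound on the backend work behind those 923 logical writes per second:
# Worst-case RAID 6 amplification: 3 reads + 3 writes per logical write
awk 'BEGIN { printf "%.0f backend ops/s spread over six spindles\n", 923 * 6 }'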
So random I/O is genuinely bad. The numbers don’t lie. An NVMe cache would absolutely obliterate those random read/write numbers. Case closed? Buy the drives?
Looking at What I Actually Do
Not so fast. The question isn’t “are random IOPS bad?” The question is “do I actually do random I/O on this NAS?”
My workloads:
Plex streaming. Large sequential reads. A 4K HDR movie is a continuous stream of multi-megabyte chunks, read sequentially from disk. The drives handle 412 MB/s sequential; the network caps at 117 MB/s. Even two simultaneous 4K streams (about 80 Mbps each) don't come close to saturating either the disks or the network (quick math after this list). NVMe cache contribution: zero.
Nightly backups. Large sequential writes. Borg backup sends big deduplicated chunks over the network. Sequential writes. Already network-bound. NVMe cache contribution: zero.
Photo library browsing. This is the one workload where cache might help. Photo Station generates thumbnails, which are lots of small random reads. But I use this maybe twice a month, and the Synology’s 16GB RAM already caches the most-accessed thumbnails. NVMe cache contribution: marginal.
Time Machine backups from my laptop. Sequential writes. Network-bound. NVMe cache contribution: zero.
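The quick math on the streaming case, using the bitrates above:
# Two 4K HDR streams at ~80 Mbit/s each, converted to MB/s
awk 'BEGIN { printf "%.0f MB/s of demand\n", 2 * 80 / 8 }'
# Against ~117 MB/s of network and 412 MB/s of sequential disk throughput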
90% of my I/O is sequential. The 10% that’s random is infrequent enough that RAM cache handles it. The NVMe would be a $200 write-endurance countdown timer sitting mostly idle.
When TO Buy NVMe Cache
NVMe read/write cache makes a real difference when your workload is dominated by random I/O (there's a quick fio test for that after the list). Specific cases:
Running VMs off NAS storage. Virtual disks are random I/O nightmares. The VM’s operating system thinks it has a local disk, so it does random reads and writes all over it. Every page fault, every log write, every temp file — random 4K I/O hitting the NAS. NVMe read cache for the frequently-accessed blocks, write cache for the bursts. This is the use case Synology’s marketing is actually targeting.
Databases. If you’re running a database server with data on the NAS (via iSCSI or NFS), those query patterns generate tons of random reads. Index lookups, joins across tables, aggregations — all random access. NVMe cache would genuinely help here.
Docker containers with lots of small file operations. Container images are composed of layers, and application startup reads from many small files across multiple layers. If you’re running 20+ containers off NAS storage, the random read pattern benefits from cache.
Multi-user file server with many simultaneous users. Ten people opening different files in different directories creates a random I/O pattern even though each individual access might be sequential. The aggregate is random. Cache helps smooth this out.
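And here's that quick test. If you're not sure which side of the line your workload falls on, a mixed random profile is a reasonable proxy for the VM/database/container pattern described above. This is a sketch, not anything Synology-specific; the 70/30 read/write split and the runtime are arbitrary:
# Mixed 4K random I/O, 70% reads / 30% writes, roughly what a busy VM disk looks like
fio --name=vm-proxy --ioengine=libaio --direct=1 \
--rw=randrw --rwmixread=70 --bs=4k --size=1G \
--numjobs=4 --iodepth=32 --runtime=60 --time_based \
--group_reporting --filename=/volume1/test/fio-test
If the IOPS that comes back is far below what your applications actually generate all day, cache starts to earn its keep. If your real traffic never looks like this test, it won't.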
When NOT To Buy NVMe Cache
Media streaming. Plex, Jellyfin, whatever. Sequential reads. The drives are fine.
Bulk storage and backups. Sequential writes. Network-bound. Cache is irrelevant.
Single-user file access. One person copying files, editing documents, browsing photos. Not enough I/O to benefit.
If your network is Gigabit. This is the big one. At 1GbE, your network is almost certainly the bottleneck, not the disks. Even a RAID 5 of old 5400 RPM drives can saturate Gigabit for sequential workloads. The right upgrade isn’t NVMe cache — it’s 10GbE NICs and a switch. But that’s $300+ and requires rewiring. Different discussion.
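A quick way to confirm what you're actually working with: DSM is Linux underneath, so the negotiated link speed is readable from sysfs over SSH. The interface name here is a guess; with link aggregation it may be bond0.
# Prints the negotiated speed in Mbit/s, e.g. 1000 for Gigabit
cat /sys/class/net/eth0/speed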
The Whole Picture: 276TB Across Two Sites
For context, the Synology DS1819+ is just one piece of the storage infrastructure. Across both sites, I’m running about 276TB of raw storage:
The Synology (Rigel-Silo) handles the local Milky Way site. At the Andromeda site, 40 miles away, there’s Meridian-Mako-Silo — an Unraid box. Different architecture, different performance profile. Unraid doesn’t stripe data across drives the way RAID does. Each drive is independent, with a parity drive for protection. This means single-drive sequential performance (about 150-180 MB/s for a modern HDD) rather than multi-drive striped performance. Worse for throughput, but you can spin down idle drives and save power, and a single drive failure doesn’t require rebuilding a full array.
The Unraid’s cache pool actually does use NVMe — but as a write cache tier, not a read cache. New files land on the NVMe cache and get moved to the array overnight by the mover. This makes sense for Unraid because the alternative is writing directly to a single spindle at single-drive speeds, with no striping to help. Different architecture, different bottleneck, different solution.
The Synology with its RAID 6 stripe already has plenty of sequential throughput. The bottleneck is the network, not the drives. Adding NVMe to the Synology would be solving a problem I don’t have.
The $200 I Didn’t Spend
I closed the Amazon tab. That $200 eventually went toward a 10GbE NIC and a MikroTik switch that actually did improve transfer speeds — but that’s a different post.
Measure before you optimize. The “slow feeling” was real, but it wasn’t the NAS. It was the Gigabit network. And the fix for that isn’t cache — it’s copper. Run iperf3 before you buy anything. Five minutes of benchmarking saves $200 of regret.