
Using 15TB+ NVMe with full PLP for ZFS — overkill SLOG or finally practical L2ARC?

Mods let me know if this crosses any lines — happy to adjust.

I’ve been working on a deployment recently using some high-capacity enterprise NVMe (15.36TB U.2, full power loss protection, ~1 DWPD endurance), and it got me thinking about how these fit into ZFS setups beyond the usual small, low-latency devices.

A few things I’ve been considering:

SLOG:

- Clearly overkill from a capacity standpoint (a SLOG only ever holds a few seconds' worth of in-flight sync writes, so a few GB is plenty), but with full PLP and solid write latency, they're about as safe as it gets for sync-heavy workloads

- Curious if anyone here is actually running larger NVMe for SLOG just for endurance + reliability headroom (rough sketch of what I had in mind below)
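For concreteness, this is roughly the SLOG layout I'd sketch out; pool name and device paths are placeholders, not the actual deployment:

```
# Mirrored log vdev: even with PLP, losing the log device and then crashing
# before the data reaches the main pool can drop the last few seconds of sync
# writes, so mirroring is cheap insurance at these capacities.
zpool add tank log mirror \
  /dev/disk/by-id/nvme-EXAMPLE-1 /dev/disk/by-id/nvme-EXAMPLE-2

# Sanity-check that sync writes are actually landing on the log vdev:
zpool iostat -v tank 5
```

(Usual caveat: the SLOG only ever sees synchronous writes, so a mostly-async workload won't touch it.)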

L2ARC:

- At this capacity, L2ARC starts to feel more viable again, especially for large working sets

- Wondering how people are thinking about ARC:L2ARC ratios when drives are this big, since every cached record costs ARC memory for its header (rough numbers below)
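My back-of-envelope on that, assuming default 128K records and something like 70-100 bytes of ARC header per L2ARC record (I believe that's the right ballpark for current OpenZFS, but I haven't checked the source), with placeholder names again:

```
# Adding (or removing) a cache device is non-destructive and can be done live:
zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE-3
# zpool remove tank /dev/disk/by-id/nvme-EXAMPLE-3   # to back it out later

# Rough ARC overhead for a 15.36 TB L2ARC:
#   15.36 TB / 128 KiB recordsize ≈ 120M cached records
#   120M records x ~70 B header   ≈ ~8 GiB of ARC spent just indexing the cache
# Smaller recordsize/volblocksize pushes that number up fast.

# What it actually costs on a live system:
grep l2_hdr_size /proc/spl/kstat/zfs/arcstats
```

So on a box with 128GB+ of RAM the headers look manageable, but on a small-memory system a full 15TB L2ARC could crowd out the ARC it's supposed to be backing.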

All-flash pools:

- With ~15TB per drive, you can get into meaningful capacity with relatively few devices

- Tradeoff seems to be fewer, larger drives (capacity density) vs more vdevs (IOPS + resiliency); quick paper comparison below
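To make that concrete, here's the kind of comparison I've been doing on paper with six of these drives (device names are placeholders):

```
# Option A: three 2-way mirrors -> ~46 TB usable, three vdevs' worth of IOPS,
# fast resilvers; a second failure inside one mirror loses the pool.
zpool create -o ashift=12 flash \
  mirror /dev/disk/by-id/nvme-EX-1 /dev/disk/by-id/nvme-EX-2 \
  mirror /dev/disk/by-id/nvme-EX-3 /dev/disk/by-id/nvme-EX-4 \
  mirror /dev/disk/by-id/nvme-EX-5 /dev/disk/by-id/nvme-EX-6

# Option B: one 6-wide raidz2 -> ~61 TB usable and any two drives can fail,
# but only a single vdev's worth of IOPS and longer resilvers.
# zpool create -o ashift=12 flash raidz2 /dev/disk/by-id/nvme-EX-{1..6}
```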

Other considerations:

- ashift alignment and sector size behavior on these newer enterprise drives

- Real-world latency vs spec sheet under mixed workloads

- Whether endurance (1 DWPD) is enough for heavy cache-tier usage long-term (back-of-envelope checks on all three points below)
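For what it's worth, these are the checks I'd run before trusting any of it, assuming Linux with nvme-cli and fio on hand; the 4K LBA format index varies per drive, so treat the format command as illustrative rather than copy-paste:

```
# 1) Sector size: plenty of enterprise NVMe ship 512e but support a native 4K format.
nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"
# If a 4K format is listed, reformat before building the pool (wipes the namespace!):
# nvme format /dev/nvme0n1 --lbaf=<index-of-4K-format>
# Then ashift=12 at zpool create time (ashift=13 if the drive prefers 8K pages).

# 2) Real-world latency: 70/30 mixed random I/O against the raw device (destructive):
fio --name=mixed --filename=/dev/nvme0n1 --direct=1 --rw=randrw --rwmixread=70 \
    --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

# 3) Endurance: 1 DWPD on 15.36 TB is ~15 TB/day of write budget.
#    The default L2ARC feed rate is capped at 8 MiB/s (l2arc_write_max),
#    only ~0.7 TB/day, so even a hot cache tier should have lots of headroom.
cat /sys/module/zfs/parameters/l2arc_write_max
```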

We ended up with a few extra from that deployment, so I’ve been especially curious how folks here would actually use drives like this in a ZFS context.

Would love to hear real-world configs or any lessons learned.
