About to full send ZFS for the first time - quick sanity check?
Hi all,
Current setup: ESXi (so hardware RAID), VMFS on top, and a file server running Debian/ext4 on a virtual disk. 3x 8TB in a hardware RAID 5, with a local 16TB JBOD for backup. The host is UPS-backed and has 192GB of ECC RAM. OS/VMs are on a hardware RAID 1 of 2x 4TB SATA SSDs. There's also a single 1TB NVMe SSD I use as a network swap drive (just for moving shit around quickly, as I have 10Gb LAN). This setup has been stable for about 5 years, but it's time to move away from ESXi and I'm out of storage.
Proposed plan: borrow 20TB of storage to temporarily throw everything onto, flatten the current server, install Debian or Rocky (yet undecided) on the 2x 4TB SSDs (likely in an md RAID 1), buy 3 more 8TB drives (6x total), then build a raidz2 out of the 6x 8TB drives.
Single vdev, single pool, maybe 3-4 datasets, no zvols.
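For concreteness, this is roughly what I'm picturing — a sketch only, with made-up device IDs, pool name, and dataset names (and I haven't settled on properties like recordsize yet):

```shell
# Hypothetical: device IDs, pool name, and dataset names are placeholders.
zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
    /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

# A few datasets, tuned per use case.
zfs create -o compression=lz4 tank/home
zfs create -o compression=lz4 -o recordsize=1M tank/media
zfs create -o compression=lz4 -o recordsize=1M tank/backups
```

Happy to be told any of those flags are wrong for this workload.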
Am I right in thinking I don't want a special vdev, due to it being a single point of failure, and that there's little benefit to a SLOG/L2ARC given the RAM quantity? Am I best off selling the 1TB NVMe to offset the cost of the 3x new 8TBs?
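My plan was to double-check this empirically after migration rather than guess — assuming the stock OpenZFS tooling is available, something like:

```shell
# Watch ARC hit ratio over time; a consistently high hit% would suggest
# L2ARC adds little on top of 192GB of RAM.
arcstat 5

# Rough gauge of sync-write volume (i.e. whether a SLOG is even relevant):
cat /proc/spl/kstat/zfs/zil
```

Does that seem like a sane way to validate the "no SLOG/L2ARC" assumption?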
Any reasons to prefer Debian or Rocky from a ZFS perspective?
I've also inherited an LTO-6 drive, so I'm interested to learn my options for flushing ZFS snapshots to tape.
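From what I've read so far, one option is to stream `zfs send` output straight to the tape device via mbuffer — sketch below with a made-up snapshot name, and I gather a raw send stream on tape is fragile (a single bad block can make the whole stream unreceivable), so tell me if tar/LTFS is the smarter route:

```shell
# Hypothetical: snapshot name and pool are placeholders; /dev/nst0 is the
# non-rewinding tape device.
zfs snapshot -r tank@weekly-1
mt -f /dev/nst0 rewind
zfs send -R tank@weekly-1 | mbuffer -s 256k -m 4G -o /dev/nst0

# Restore would then look something like:
# mt -f /dev/nst0 rewind
# mbuffer -i /dev/nst0 -s 256k -m 4G | zfs receive -F tank
```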