Unofficial ZFS-on-root installation instructions

ZFSBootMenu combined with rEFInd is effective for native ZFS encryption, and it even natively supports booting from ZFS snapshots. After replacing the hooks as described in my previous post and rebuilding the initramfs, simply add the EFI binary from https://github.com/zbm-dev/zfsbootmenu/releases/download/v1.12.0/zfsbootmenu-release-vmlinuz-x86_64-v1.12.0.EFI to /boot/efi/EFI/zbm, or whatever EFI folder you want to put it in. Then run pacman -S refind. The rEFInd install from within a chroot can be done with refind-install --usedefault /dev/<install drive>. In the directory where you placed the ZFSBootMenu EFI binary, add a file named refind_linux.conf containing the following:

"Boot default"  "zbm.prefer=<rootpoolname> ro quiet loglevel=0 zbm.skip"
"Boot to menu"  "zbm.prefer=<rootpoolname> ro quiet loglevel=0 zbm.show"

After that, the kernel command line needs to be set with ZFS properties, like this:
zfs set org.zfsbootmenu:commandline="rw loglevel=0 quiet nomodeset" zpendeavouros/ROOT/eos/root

Let me know if I missed anything. I'm not sure whether rEFInd by itself has ZFS support, including encrypted pools. @dalto GRUB definitely does not have full ZFS encryption support without a separate boot pool with many features disabled, as described here: https://openzfs.github.io/openzfs-docs/Getting%20Started/Arch%20Linux/Root%20on%20ZFS/5-bootloader.html

Is this going to be included in a future EndeavourOS release?

A benchmark of LUKS vs. native encryption for ZFS has been run.

Keep in mind that is literally benchmarking encryption performance. It isn't benchmarking the impact of that performance in a real-world scenario.

He is doing all that testing against a filesystem mounted in RAM.

IMO, it would be much more interesting to see what the practical differences are.

I'm not sure whether the number of characters affects encryption performance. But I thought a password of any length is converted by a key-derivation function into a key with a fixed number of bits, for example a constant 256 bits as in "AES-256", which is the same for LUKS and native encryption.
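That intuition can be checked with a short sketch. PBKDF2 is used here as the key-derivation function (ZFS native encryption uses PBKDF2 for passphrase keys; LUKS2 defaults to argon2id, but the principle is the same); the salt and iteration count are made-up illustration values:

```python
import hashlib

# Derive a 256-bit (32-byte) key from passphrases of very different lengths.
# The derived key size is fixed regardless of how long the passphrase is.
salt = b"example-salt"  # hypothetical salt, just for illustration

for passphrase in (b"short", b"a much, much longer passphrase entirely"):
    key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000, dklen=32)
    print(len(key) * 8)  # prints 256 both times
```

So password length affects only how hard the passphrase is to guess, not the size of the key the cipher actually works with.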

I can only share my personal experience with zfs encryption. And my experience is excellent. My own fio benchmarks show hardly any difference between encrypted and unencrypted performance of zfs.

Example: I have a raid10 dataset with 2x2 4 TB drives. My PC has 64 GB of RAM, so I typically run one 64 GB fio job per test.
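A job file along these lines would match that description (a sketch only; the exact parameters behind the numbers below aren't given in the post, so the block size, I/O engine, and paths here are assumptions):

```ini
; hypothetical fio job: 64 GB sequential write then read against the pool,
; sized to match RAM (64 GB) so ARC caching cannot hide the disks
[global]
directory=/path/to/zfs/dataset
size=64g
bs=1m
ioengine=libaio

[seq-write]
rw=write

[seq-read]
rw=read
stonewall         ; start only after the write job finishes
```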

zfs raid10 unencrypted (compression=off):
   READ: bw=338MiB/s (355MB/s)
  WRITE: bw=230MiB/s (241MB/s)

zfs raid10 with native encryption (compression=off):
   READ: bw=343MiB/s (360MB/s)
  WRITE: bw=252MiB/s (264MB/s)

In this example encrypted performance is even better than unencrypted :wink:
But the deviation is within the statistically anticipated error range. So for all practical purposes, ZFS encryption gives me no performance penalty at all (on a very fast PC, of course: Ryzen 9 5900X).


Try running cp (for writing) and cat (for reading) in the real world, without fio.

Open a bash shell in a terminal:

Writing benchmark:

  • Copy some big data from another filesystem to the ZFS raid10:
time cp /ext4/anyData /path/to/zfs/

Reading benchmark:

time for i in {0..5}; do cat /zpool/anyData > /dev/null; done

There is no need to do more testing. The CPU encrypts too fast to slow down the read/write process. The disks are significantly slower, which gives the CPU plenty of time to do the encryption.

ZFS uses the aes-256-gcm algorithm. If I test the speed on my PC with openssl, I get:

openssl speed -evp aes-256-gcm
...
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
aes-256-gcm     804665.22k  2067270.64k  4142959.65k  5257610.92k  5991672.64k  6032637.95k

The slowest value in this series is 804665.22k = 767 MB/s, the fastest is 6032637.95k = 5753 MB/s. There is no way that encryption slows down the raid10, which performs at about 200-300 MB/s.

I can also check with “cryptsetup benchmark”. This does not support aes-gcm, but it does support aes-cbc, which is significantly slower than aes-gcm:

cryptsetup benchmark
...
#     Algorithm |       Key |      Encryption |      Decryption
        aes-cbc        128b      1563,2 MiB/s      6816,3 MiB/s
        aes-cbc        256b      1185,2 MiB/s      5405,8 MiB/s

On a fast NVMe drive you might see a slight slowdown due to encryption, but not on a spinning disk.


Just for the record, I did the test with my fastest NVMe: Samsung SSD 970 EVO Plus 1 TB

zfs unencrypted (compression=off):
   READ: bw=3310MiB/s (3471MB/s)
  WRITE: bw=1087MiB/s (1140MB/s)

zfs with native encryption (compression=off):
   READ: bw=2370MiB/s (2485MB/s)
  WRITE: bw=1118MiB/s (1172MB/s)

Here I see a clear performance impact for the read operation. But this is expected: my CPU cannot encrypt or decrypt as fast as the NVMe reads. Overall, the performance is still very good.

PS
When you search for benchmarks of this drive on the internet, you need to be careful. The drive has a 32 GB cache. This makes the first 32 GB of write operations very fast (TurboWrite), with a max write speed of 3,300 MB/s. But that is not sustainable for bigger data volumes, like in my 64 GB fio benchmark. In that case the raw max write speed for this drive is 1,700 MB/s.

https://www.relaxedtech.com/reviews/samsung/970-evo-plus/

Thanks for putting together this quick guide!

I ran through it using the “replace partition” option, as I had a chunk of empty space on my SSD that I wanted to install into while preserving my existing Windows partitions.

I selected the encrypt checkbox and also got LUKS-based encryption.

I have the install log / can share the termbin link if needed.

Yes, the zfs support in Calamares was built by me quite a long time ago. It was one of the first things I did with Calamares.

I only implemented encryption for manual partitioning and use entire disk.


Gotcha. Thanks for clarifying!


Any instructions on adding zfsbootmenu?

I haven’t tried zfsbootmenu yet. It looks interesting, but it is a bit further down my todo list.

If you get it working, let us know.

I'm curious about ZFS on / now that dracut is being used.
Has anybody gotten zfs-dracut packaged (apparently it exists in Ubuntu), hopefully at least in the AUR, but ideally in archzfs?
Any Arch-based experiences?
I've learned the hard way that it's dangerously time-consuming to boldly go where no one has gone before!

zfs support “just works” in dracut.

There is no special config needed.

I have been using dracut with zfs for 6 months or so.

Great, I now notice dracut.zfs in the man pages, and the module is indeed already in /usr/lib/dracut/modules.d/.

Now my challenge appears to be limited, for now, to getting ZFS on / with NVMe on an old Supermicro BIOS that needs a separate USB GRUB /boot disk… (which works so far with non-ZFS on /)

cheers

I tried this and I can't get it past the installer. I did all the steps on here: manually partitioned my two drives in the EndeavourOS installer, created a 1024 MB FAT32 partition with boot flags and an /efi mount on both sda and sdb, then set the rest of the disk space as ZFS with a / mount. The installer keeps failing, and I'm not sure if I'm missing steps. It first failed with “failed to import zpool.”

https://termbin.com/z42z1

It looks like you are setting multiple /efi and multiple / mounts. You can’t do that.

You need only one / and one /efi. Also, you must check the box to format the / partition.

So how should I set up sda and sdb to do a mirror? sda1 would be FAT32 /efi with the boot flag and sda2 would be ZFS /? Then what do I do with sdb?