Installing EndeavourOS on RAID0 (Intel RST → mdadm)

Hi everyone, I just managed to successfully boot EndeavourOS on a RAID0 setup across my two 2 TB NVMe SSDs. It was quite a challenge…

I wanted to share my experience setting up EndeavourOS on a RAID0 setup with NVMe drives, because it wasn’t straightforward and I ran into some traps with Intel RST and dracut.

Challenges / Issues

  1. Intel Rapid Storage Technology (RST) RAID

    • The Calamares installer crashed or failed when it detected Intel RST RAID volumes (it crashed during partitioning, or refused to start after partitioning with GParted instead).
  2. mdadm RAID0

    • Switching to a pure mdadm RAID0 worked, but getting the system to boot required manual intervention.
    • Dracut didn’t automatically include the RAID modules, so the initial initramfs failed to assemble the array at boot.
    • UEFI boot also needed careful partitioning, proper mounting, and correct systemd-boot configuration.

Steps I Took

1. Clear any previous RAID / filesystem info

sudo mdadm --stop --scan
sudo mdadm --zero-superblock /dev/nvme0n1
sudo mdadm --zero-superblock /dev/nvme1n1

sudo wipefs -a /dev/nvme0n1
sudo wipefs -a /dev/nvme1n1

2. Partition the drives

NVMe0 (main drive with EFI):

sudo parted /dev/nvme0n1 -- mklabel gpt
sudo parted /dev/nvme0n1 -- mkpart ESP fat32 1MiB 513MiB
sudo parted /dev/nvme0n1 -- set 1 boot on
sudo mkfs.fat -F32 /dev/nvme0n1p1
sudo parted /dev/nvme0n1 -- mkpart primary 513MiB 100%

NVMe1 (second drive for RAID):

sudo parted /dev/nvme1n1 -- mklabel gpt
sudo parted /dev/nvme1n1 -- mkpart primary 1MiB 100%
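Before creating the array, it is worth sanity-checking the layout. Running lsblk should show something roughly like the following (device names, sizes, and column spacing are illustrative for two 2 TB drives):

```
$ lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/nvme0n1 /dev/nvme1n1
NAME        SIZE TYPE FSTYPE
nvme0n1     1.9T disk
├─nvme0n1p1 512M part vfat
└─nvme0n1p2 1.9T part
nvme1n1     1.9T disk
└─nvme1n1p1 1.9T part
```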

3. Create the RAID0 array

sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p1
cat /proc/mdstat   # check array status
sudo mkfs.ext4 /dev/md0
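As a quick sanity check, /proc/mdstat should report the array as active raid0 before you format it. A minimal sketch of that check, using a here-doc with illustrative output in place of the real file (device names and block counts will differ on your system — on a real install you would grep /proc/mdstat directly):

```shell
# Hedged sketch: the here-doc stands in for /proc/mdstat and mimics what it
# typically shows for a healthy two-member RAID0 array.
mdstat=$(cat <<'EOF'
Personalities : [raid0]
md0 : active raid0 nvme1n1p1[1] nvme0n1p2[0]
      3906764800 blocks super 1.2 512k chunks
EOF
)
# The actual check: the array line must say "active raid0".
printf '%s\n' "$mdstat" | grep -q 'md0 : active raid0' && echo "RAID0 array is up"
```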

4. Mount for installation

sudo mount /dev/md0 /mnt
sudo mkdir -p /mnt/efi
sudo mount /dev/nvme0n1p1 /mnt/efi
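For reference, the fstab that Calamares generates from these mounts should end up looking roughly like this (the UUID placeholders and mount options are illustrative — use whatever blkid reports on your system and whatever options the installer writes):

```
# /etc/fstab — illustrative sketch, not the installer's literal output
UUID=<UUID-of-nvme0n1p1>  /efi  vfat  defaults,noatime  0 2
UUID=<UUID-of-md0>        /     ext4  defaults,noatime  0 1
```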

5. Install EndeavourOS via Calamares

  • Select primary RAID partition as /
  • Select EFI partition as /efi

6. After installation: chroot and configure

sudo arch-chroot /mnt
pacman -Syu linux linux-headers linux-firmware --needed

Add dracut RAID module:

nano /etc/dracut.conf.d/raid.conf
# add: add_dracutmodules+=" mdraid "

Update mdadm config:

mdadm --detail --scan >> /etc/mdadm.conf
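The line that gets appended should look something like this (the name and UUID here are made-up placeholders; yours will differ). If you run the command more than once, check that /etc/mdadm.conf doesn’t end up with duplicate ARRAY lines:

```
ARRAY /dev/md0 metadata=1.2 name=eos:0 UUID=a1b2c3d4:e5f6a7b8:c9d0e1f2:a3b4c5d6
```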

7. Configure systemd-boot

  • Get the RAID UUID:
blkid /dev/md0
  • Edit loader entry:
nano /efi/loader/entries/<uuid>-arch.conf
# options root=UUID=<UUID-of-md0> rw rd.auto rd.md=1
  • Rebuild initramfs:
dracut --force --verbose
# ensure output includes: including module: mdraid
  • Verify partitions and EFI:
sudo fdisk -l /dev/nvme0n1
ls /efi/EFI/systemd
efibootmgr -v

Expect to see:

  • /efi/EFI/systemd/systemd-bootx64.efi
  • Boot entry like: Boot0000* Linux Boot Manager ... systemd-bootx64.efi
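Putting the loader pieces together, the entry file should look roughly like this. The title and the linux/initrd paths are illustrative — the exact paths depend on how the entry was generated on your install — and the <uuid> and <UUID-of-md0> placeholders stay as placeholders; only the options line is the part this guide actually changes:

```
# /efi/loader/entries/<uuid>-arch.conf — illustrative sketch
title    EndeavourOS
linux    /<uuid>/linux
initrd   /<uuid>/initrd
options  root=UUID=<UUID-of-md0> rw rd.auto rd.md=1
```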

✅ Result: The system now boots reliably from a RAID0 array with NVMe drives using mdadm, avoiding Intel RST.


Thanks for posting!

This won’t always work. It is better to use dracut-rebuild.


Oh thanks Dalto!


I can’t seem to edit the post anymore, but after using the setup for a couple of days I have the following two additions/changes:

1. Your raid.conf needs additional drivers:

nano /etc/dracut.conf.d/raid.conf
# add: add_dracutmodules+=" mdraid "
# add: add_drivers+=" md_mod raid0 vmd nvme ahci "

2. Instead of using dracut --force --verbose, use Dalto’s suggestion:

sudo dracut-rebuild
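For completeness, here is what /etc/dracut.conf.d/raid.conf looks like with both of the changes from this thread applied (this just combines the two lines quoted above):

```
# /etc/dracut.conf.d/raid.conf
add_dracutmodules+=" mdraid "
add_drivers+=" md_mod raid0 vmd nvme ahci "
```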