Grub 2:2.06.r322.gd9b4638c5-1 won't boot and goes straight to the BIOS after update

That sounds like you didn’t set up the chroot properly.

What does lsblk -o name,type,fstype,size show?

Is it possible that my EFI partition is the nvme0n1p1 one?

[root@EndeavourOS /]# lsblk -o name,type,fstype,size
NAME              TYPE  FSTYPE   SIZE
loop0             loop           1.6G
sda               disk          14.9G
├─sda1            part           1.7G
└─sda2            part           104M
nvme0n1           disk         238.5G
├─nvme0n1p1       part           400M
└─nvme0n1p2       part         238.1G
  └─mycryptdevice crypt        238.1G

Yes. Mount it and look inside it to know for sure.

[root@EndeavourOS /]# sudo mount /dev/nvme0n1p1 /mnt
[root@EndeavourOS /]# ls /mnt/


Are these the correct next steps?

[root@EndeavourOS /]# sudo umount /dev/nvme0n1p1
[root@EndeavourOS /]# sudo mount /dev/nvme0n1p1 /mnt/boot/efi

No, after unmounting that, you need to mount the root partition on /mnt first. The EFI partition then gets mounted underneath it, at /mnt/boot/efi.


Just in case it can help others, here is what I did to solve this grub issue on an encrypted btrfs disk.

First, boot a live USB made from the latest EndeavourOS ISO downloaded from the website.
Then I followed the expected procedure: The latest grub package update needs some manual intervention

But since it might not be obvious to everyone, here is a quick summary.

Get the actual partition names:

sudo fdisk -l
sda1: EFI
sda2: Linux File System

(be sure to adapt the partition names to your case)

Unlock encrypted partition:
sudo cryptsetup open /dev/sda2 mycryptdevice

It’s now available in /dev/mapper/mycryptdevice

Now mount all btrfs subvolumes from the unlocked partition:

sudo mount -o subvol=@ /dev/mapper/mycryptdevice /mnt
sudo mount -o subvol=@log /dev/mapper/mycryptdevice /mnt/var/log
sudo mount -o subvol=@cache /dev/mapper/mycryptdevice /mnt/var/cache
sudo mount -o subvol=@home /dev/mapper/mycryptdevice /mnt/home
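The four mounts above follow a simple pattern, so here is the same sequence as a dry run that just prints each command instead of executing it (the subvolume names @, @log, @cache, @home are the ones from this guide; adapt them if your layout differs):

```shell
# Dry run: print the mount commands instead of executing them.
# Device and subvolume names are taken from the guide above.
dev=/dev/mapper/mycryptdevice
echo "mount -o subvol=@ $dev /mnt"
for sv in log cache home; do
    case $sv in
        home) target=/mnt/home ;;      # @home sits directly under /mnt
        *)    target=/mnt/var/$sv ;;   # @log and @cache sit under /mnt/var
    esac
    echo "mount -o subvol=@$sv $dev $target"
done
```

Remove the echo (and add sudo) to actually run the mounts.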

Then mount the ESP from /dev/sda1 (make sure it really is /dev/sda1 in your case):
sudo mount /dev/sda1 /mnt/boot/efi

Now I’m able to chroot into my install:
sudo arch-chroot /mnt

Finally I was able to repair grub with:

grub-mkconfig -o /boot/grub/grub.cfg
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=EndeavourOS-grub

Reboot, and it worked!

What I missed was that I had mounted /dev/sda2 on /boot/efi instead of /dev/sda1 … a small mistake that took me a few minutes to figure out.

Hope this will simplify the procedure for you if you have the same kind of setup.

Best of luck to you,

Edit: you will end up with a new UEFI entry, EndeavourOS-grub (the previous one was EndeavourOS). You may need to select the new one after reboot in your BIOS boot menu.



[root@EndeavourOS /]# sudo umount /dev/nvme0n1p1
[root@EndeavourOS /]# sudo mount /dev/nvme0n1p1 /mnt/

After all this fiddling I’m a bit confused about what to do next. I’m sorry. :cold_sweat:
How do I mount the EFI partition correctly so I can reinstall grub from there?

Edit: I’ll check what @Lugh wrote.

This is wrong, I think. That is your EFI partition. It gets mounted on /mnt/boot/efi after you mount your root partition on /mnt.

[root@EndeavourOS /]# sudo mount /dev/nvme0n1 /mnt/
mount: /mnt: /dev/nvme0n1 already mounted or mount point busy.
       dmesg(1) may have more information after failed mount system call.
[root@EndeavourOS /]# sudo mount /dev/nvme0n1p1 /mnt/boot/efi
mount: /mnt/boot/efi: mount point does not exist.
       dmesg(1) may have more information after failed mount system call.
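For what it’s worth, that second error only means the directory is missing: /mnt/boot/efi appears once the root subvolume is mounted on /mnt, because a mount point is nothing more than an ordinary directory. A throwaway illustration (deliberately using a temp path, not /mnt):

```shell
# A mount point is just a directory that must exist before you mount onto it.
mkdir -p /tmp/efi-demo/boot/efi
test -d /tmp/efi-demo/boot/efi && echo "mount point exists"   # prints: mount point exists
```

So mount the root first; the real /mnt/boot/efi will then be there.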


Maybe I should start over again.

It’s really confusing that my disk is named nvmeXXXX and not the usual sdX thing.

That isn’t even a partition. What are you trying to do there?

This should be fixed now with version 1.1

I just tried to follow the guides on how to fix the grub issue. But as my system is on btrfs and LUKS, it’s quite complicated.
Obviously too complicated for my level of knowledge.

I installed it this way following a tutorial on automatic snapshots with btrfs so I can easily go back.
I thought this would be nice for a beginner: being able to revert things with one click.

But now it seems this idea was quite dumb. :sweat:

I’m still trying to mount the efi partition. :confused:

What @lugh posted above should work for you. He has a very similar setup to yours.

Where he uses /dev/sda2, you should use /dev/nvme0n1p2 instead.

Where he uses /dev/sda1, you should use /dev/nvme0n1p1 instead.
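If it helps, the renaming is mechanical enough to sketch with a shell substitution (purely illustrative):

```shell
# Translate the guide's sda partition names into this machine's NVMe names.
for dev in /dev/sda1 /dev/sda2; do
    echo "${dev/sda/nvme0n1p}"   # prints /dev/nvme0n1p1, then /dev/nvme0n1p2
done
```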


Also, you might want to reboot cleanly before you start that.


If you try the procedure I followed, replacing:

/dev/sda1 with /dev/nvme0n1p1
/dev/sda2 with /dev/nvme0n1p2

This should do the trick.

@Moppel I had the exact same error due to my mistake while mounting the ESP.
grub-install: error: /boot/efi doesn’t look like an EFI partition.


Should we run the script again, or is just upgrading the package to 1.1 enough?

Updating the package to 1.1 will fix the issue for future kernel installs.

If you want to fix it for currently installed kernels you can do so with:

while read -r kernel; do
    kernelversion=$(basename "${kernel%/vmlinuz}")
    echo "Installing kernel ${kernelversion}"
    sudo kernel-install add "${kernelversion}" "${kernel}"
done < <(find /usr/lib/modules -maxdepth 2 -type f -name vmlinuz)

Of course, the only difference will be in the fallback entries. If you don’t care about the fallback entries, you could just wait for the next kernel update.
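In case the parameter expansion in that loop looks opaque, here is what it does to a single kernel path (the version string below is a made-up example; the real paths come from the find command):

```shell
# Example path like those produced by: find /usr/lib/modules -name vmlinuz
kernel=/usr/lib/modules/6.1.1-arch1-1/vmlinuz
# Strip the trailing /vmlinuz, then take the last path component:
kernelversion=$(basename "${kernel%/vmlinuz}")
echo "$kernelversion"   # prints: 6.1.1-arch1-1
```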


I have only one thing to say about this whole grub issue: there is so much different information in all of these posts that one cannot follow anything. I’m very disillusioned with grub right now. It’s all fine and dandy that I can get my system to boot with some of these methods, but if you update grub at any time it’s just problem after problem. I don’t see any solution fixing the issue; I just see more confusion. :disappointed:

Truly sad. It really shouldn’t be that complicated. I had no issues with re-installing grub to /dev/sda (on non-UEFI systems).

As of right now, the problem is still being investigated. Arch devs, as well as @dalto (who’s been a legend during all of this, btw), are discussing the issue via the bug report here:

For a majority of users, the solution @sradjoker has posted here works. That thread alone has seen over 4k views in only two days, which shows just how much impact this grub issue has had. It is interesting to ask whether this has affected more Arch or Arch-derivative users, but everything is still ongoing, so it’s hard to say at the moment.

I fixed my own issue with method 2 since and everything has worked fine for me. But I am tempted to look into systemd-boot now as a possible alternative. I don’t want to knee-jerk anything on my system at the moment, so since everything is working, I’ll wait and see what the Arch devs and grub upstream determine. For now, just a wait and see! :broccoli: