Kernel Lockdown active after fixing "vfat" boot issue, modules fail to load

Hello,

After a system update, I fixed an initial vfat boot error, but now I face a new problem: Kernel Lockdown is active and blocking necessary drivers, causing intermittent black screens and non-functional Wi-Fi.

Here is the evidence from my journalctl -b 0 log:

1. Kernel Lockdown is being initialized:

Oct 17 12:41:03 82K2 kernel: LSM: initializing lsm=capability,landlock,lockdown,yama,bpf

2. The NVIDIA driver is blocked as a result:

Oct 17 12:41:08 82K2 supergfxd[742]: [ERROR supergfxctl::controller] Action thread errored: Modprobe error: modprobe nvidia_drm failed: "modprobe: ERROR: could not insert 'nvidia_drm': Operation not permitted\n"

My Wi-Fi module (rtw88_8822ce) also fails to load.
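For anyone reproducing this: the kernel exposes the lockdown state and the active LSM stack directly through securityfs (a quick sketch; the paths are standard on lockdown-capable kernels, and the example output is illustrative):

```shell
if [ -r /sys/kernel/security/lockdown ]; then
    # The bracketed entry is the enforced mode,
    # e.g. "none [integrity] confidentiality" means integrity mode is active.
    cat /sys/kernel/security/lockdown
    # Just the active mode:
    grep -o '\[[a-z]*\]' /sys/kernel/security/lockdown
    # The comma-separated list of LSMs active on this boot:
    cat /sys/kernel/security/lsm
fi
```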

System Info:

  • OS: EndeavourOS
  • Kernel: linux-zen
  • CPU: AMD Ryzen 5 5600H
  • GPU: NVIDIA GeForce GTX 1650 Mobile
  • Wi-Fi: Realtek RTL8822CE

Actions already taken:
I have performed a full system update, reinstalled kernels, rebuilt initramfs with dracut, and regenerated the GRUB config. In my UEFI, Secure Boot is Disabled and I have used “Reset to Setup Mode” to clear all Platform Keys. The problem persists.

Why is the kernel still enforcing lockdown, and what is the correct way to disable it if the standard UEFI methods are not working?

What is the output of systemctl status apparmor.service?

And:

pacman -Q | grep apparmor

Here is the output you requested:

$ systemctl status apparmor.service
Unit apparmor.service could not be found.

$ pacman -Q | grep apparmor
(no output)

It seems AppArmor is not installed on my system.

Then you wouldn’t need to have these kernel boot parameters:

Remove them and rebuild your initramfs. Let’s see if that helps.
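A minimal sketch of that procedure on an EndeavourOS/dracut setup (the sed pattern assumes the quoted parameters were an lsm=... entry, and dracut-rebuild is the helper shipped by eos-dracut; adjust to what you actually have):

```shell
# Drop any lsm=... entry from the kernel command line in /etc/default/grub
sudo sed -i 's/ lsm=[^" ]*//' /etc/default/grub

# Regenerate the GRUB config so the new command line is used
sudo grub-mkconfig -o /boot/grub/grub.cfg

# Rebuild all initramfs images (helper from the eos-dracut package)
sudo dracut-rebuild
```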

ref: https://wiki.archlinux.org/title/AppArmor

I’ve checked my GRUB configuration. The lsm= parameter is not present.

$ grep "GRUB_CMDLINE_LINUX_DEFAULT" /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="nowatchdog nvme_load=YES nvidia_drm.modeset=1 loglevel=3"

It seems lockdown is being initialized by the kernel by default.

Hmm… that is strange…

What is the output of:

cat /proc/cmdline

Here is the output of cat /proc/cmdline:

BOOT_IMAGE=/boot/vmlinuz-linux-zen root=UUID=b4ca3a6e-f5ec-4093-9642-92096be5c972 rw nowatchdog nvme_load=YES nvidia_drm.modeset=1 loglevel=3

As you can see, the lsm= parameter is not being passed to the kernel at boot.

You are right. It seems to be part of the default configuration of the kernel:

zgrep CONFIG_LSM /proc/config.gz
CONFIG_LSM_MMAP_MIN_ADDR=65536
CONFIG_LSM="landlock,lockdown,yama,integrity,bpf"

That being the case, I doubt it should prevent the Nvidia module from loading at boot. We would have seen hundreds of posts reporting it here. There must be some other reason that needs more digging, I think.

You are right, it is strange that this isn’t a more common issue.

Given that we have confirmed Lockdown is active and is the mechanism causing the Operation not permitted error, would it be a valid diagnostic step to temporarily disable it?

I am considering adding the lockdown=none kernel parameter to my GRUB config to see if it resolves the issue. Is this a reasonable test?
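Concretely, the test I have in mind would look like this (a sketch; note that, as far as I can tell, the kernel's admin-guide documents only integrity and confidentiality as accepted lockdown= values, so it is possible that none is simply ignored):

```shell
# Append lockdown=none to the kernel command line in /etc/default/grub
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"$/GRUB_CMDLINE_LINUX_DEFAULT="\1 lockdown=none"/' /etc/default/grub
sudo grub-mkconfig -o /boot/grub/grub.cfg

# After a reboot, confirm the parameter actually reached the kernel:
cat /proc/cmdline
```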

I am not sure that it is really the root cause of the issue.

What I have found about it so far indicates that the LSM configuration primarily controls access control, process confinement, and other security policies within the kernel. It is not typically designed to restrict module loading itself unless explicitly combined with other security measures.

Can you try linux-lts (+ linux-lts-headers) to see if you have the same issue there too?

I tried booting with the linux-lts kernel. It failed and dropped me into an emergency shell.

The error is [FAILED] Failed to mount /sysroot. Here is a screenshot of the boot failure:

This indicates that the initramfs for my linux-lts kernel is broken and cannot mount the root filesystem. The linux-zen kernel, however, does boot (though it enters lockdown).

This seems to confirm that the issue is not global, but specific to each kernel’s configuration/initramfs.
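If the LTS initramfs is the broken piece, regenerating just that image from the chroot would look something like this (a sketch; the module-directory layout and the image name are assumptions about the eos-dracut setup):

```shell
# The installed LTS kernel version is the directory name under /usr/lib/modules
kver=$(basename /usr/lib/modules/*-lts)

# Force-regenerate the initramfs for exactly that kernel
sudo dracut --force --kver "$kver" /boot/initramfs-linux-lts.img
```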

That was unexpected and, honestly, discouraging. The LTS kernel is often recommended as a backup in case an update to the latest stable kernel goes wrong.

At this point I don’t really know what is going on in your system. I’m starting to suspect that what we are seeing in this thread (and also your vfat issue in the other one) may be only symptoms of some deeper issue(s). What might that be? I have no clue, to be honest. Or I may be totally wrong; if that were the case, I couldn’t be more glad :blush:

Thank you for your help so far. I agree that this is a strange and deep issue.

I will continue troubleshooting on my own. My next steps will be to investigate further to see if I can find the root cause. I plan to run more diagnostics, focusing on:

  • Potential issues with dracut configuration.
  • Verifying filesystem permissions and attributes.

I will report back if I find anything significant.

I ran a diagnostic on my dracut configuration and found that I have both dracut and eos-dracut installed:

$ pacman -Q | grep dracut
dracut 108_eos-1
eos-dracut 1.7-1

Is this the correct configuration, or could these two packages be conflicting with each other? I’m trying to determine if this is the reason for my problems.

Yes, those are the packages you need for EOS’ implementation of dracut in conjunction with Grub.

I have continued troubleshooting and wanted to provide a full update on the current state of the system.

Unfortunately, the problems persist. linux-zen has maybe a 1-in-5 chance of reaching the actual desktop; usually I just get a black screen and have to do everything from a TTY. (One strange detail: when the desktop never comes up and I run sudo modprobe nvidia_drm from a TTY, the command prints nothing, but when I run it from inside a desktop session it reports an error, and then it keeps erroring even in a TTY for the rest of that session.) linux-lts, meanwhile, fails to boot entirely (Failed to mount /sysroot).

The core issue remains: kernel modules cannot be loaded due to an Operation not permitted error, which seems to be caused by Kernel Lockdown being active despite Secure Boot being disabled.

Here is a screenshot from a TTY session on the linux-zen kernel, which summarizes the current state. It shows that lockdown=none is being passed as a kernel parameter, yet Lockdown remains active and modprobe still fails:


Detailed Steps I Have Taken Since My Last Post:

To eliminate all possibilities of file corruption or misconfiguration, I performed a complete, “clean” reinstallation of all kernel and firmware components from chroot. Here is the exact procedure I followed:

1. Fixed linux-firmware Structure:
Based on the recent Arch Linux news, I suspected an issue with the linux-firmware split.

  • I ran pacman -Rdd linux-firmware to remove the old package.
  • I ran pacman -Syu linux-firmware to install the new firmware packages.

2. Full Kernel Reinstallation (Cache Cleared):
To ensure no corrupted files were being used from the cache, I did a full reinstall.

  • I cleared the pacman cache for all kernel packages with rm /var/cache/pacman/pkg/linux-*.
  • I forced a database sync with pacman -Syy.
  • I completely removed all kernels, headers, and the NVIDIA driver with pacman -Rns linux-lts linux-lts-headers linux-zen linux-zen-headers nvidia-dkms.
  • I reinstalled everything from scratch, forcing a fresh download from the repositories: pacman -S linux-lts linux-lts-headers linux-zen linux-zen-headers nvidia-dkms linux-firmware.
  • Finally, I regenerated the GRUB config with grub-mkconfig -o /boot/grub/grub.cfg.

This entire process completed without any errors. DKMS successfully built the NVIDIA modules, and dracut successfully generated all initramfs images. A check of /boot confirmed that all vmlinuz and initramfs files were new and had matching timestamps.

Conclusion:

Despite a complete and clean reinstallation of all relevant packages, the Kernel Lockdown issue persists, and the system remains broken in the same way.

At this point, I have exhausted all standard repair procedures. The problem seems to be that the kernel is ignoring both the UEFI state (Secure Boot off, keys cleared) and kernel parameters (lockdown=none).

So maybe I’m just overlooking something or haven’t checked the right things. It would be nice if someone could offer new ideas.
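For completeness, here is how I can double-check what the firmware itself reports for Secure Boot from inside Linux, independent of the UEFI menus (a sketch; mokutil may need to be installed first):

```shell
# mokutil queries the firmware's Secure Boot state directly
mokutil --sb-state

# Alternative: read the SecureBoot EFI variable. The first 4 bytes are
# variable attributes; the last byte is 1 (enabled) or 0 (disabled).
od -An -tu1 /sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c | awk '{v=$NF} END {print v}'
```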

Try nvidia-open-dkms instead of nvidia-dkms and see if it makes any difference.

Still stuck here.

I’m thinking of trying Load Default Settings in the UEFI.

Can you boot your live usb, chroot into the system and post:

pacman -Q | grep nvidia

[liveuser@eos-2025.03.19 ~]$ sudo mount /dev/nvme0n1p6 /mnt && sudo mount /dev/nvme0n1p1 /mnt/boot/efi && sudo arch-chroot /mnt
[root@EndeavourOS /]# pacman -Q | grep nvidia
lib32-nvidia-utils 580.95.05-1
libva-nvidia-driver 0.0.14-1
linux-firmware-nvidia 20251011-1
nvidia-hook 1.5.2-1
nvidia-inst 25.10-1
nvidia-open-dkms 580.95.05-1
nvidia-prime 1.0-5
nvidia-settings 580.95.05-1
nvidia-utils 580.95.05-1
[root@EndeavourOS /]#
