Can't mount array after update to 5.18 kernel

I am running into an unusual issue after using pacman -Syu to update to linux-5.18.arch1-1 and linux-headers-5.18.arch1-1.

After upgrading, my machine boots to the failover console as root after complaining about a timeout while trying to mount a RAID10 array. I also see a few odd messages in the journal where disk references or paths appear garbled. Here are:

the link to my report for inxi -Fxxc0z --no-host
and the journal for the failed boot session journalctl -b -2

I was able to fix the problem by rebooting into the linux-lts kernel and downgrading with sudo pacman -U linux-5.17.9.arch1-1-x86_64.pkg.tar.zst linux-headers-5.17.9.arch1-1-x86_64.pkg.tar.zst.
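For reference, here is a sketch of that downgrade, assuming pacman's default package cache location (/var/cache/pacman/pkg, which keeps previously installed package files). It only prints the command so you can check the paths exist first:

```shell
# Build the downgrade command from the cached packages mentioned above.
# Cache path is pacman's default and may differ on your system.
cache=/var/cache/pacman/pkg
printf 'sudo pacman -U %s/%s %s/%s\n' \
  "$cache" linux-5.17.9.arch1-1-x86_64.pkg.tar.zst \
  "$cache" linux-headers-5.17.9.arch1-1-x86_64.pkg.tar.zst
```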

I think I’ll probably just wait for an updated version of the kernel and try again then.

(PS: inxi shows kernel 5.17.9-arch1-1, but that’s because I had to downgrade the kernel to boot fully. Everything else has been kept constant.)


There are:

  • errors on the 80-udev rules commands
  • timeouts from systemd on the disk, and after that a timeout on fsck
  • SATA1 drops back to 1.5 Gb/s
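On the SATA link speed point: the kernel logs the negotiated speed at boot, so it is easy to check. A sketch (the sample line in the comment is illustrative, not from this machine's logs):

```shell
# Check the negotiated SATA link speed reported by the kernel
# (relates to the "drops back to 1.5 Gb/s" observation above).
# "|| true" keeps the sketch from failing where dmesg is restricted.
dmesg 2>/dev/null | grep -i 'SATA link up' || true
# a typical line looks like:
# ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
```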

You probably want to fix this too:

May 25 06:45:11 phenom systemd-udevd[375]: Configuration file /usr/lib/udev/rules.d/77-mm-fibocom-port-types.rules is marked executable. Please remove executable permission bits. Proceeding anyway.

Thanks, I fixed that one! :slight_smile:
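For anyone hitting the same warning: udev wants rules files to be non-executable, so the fix is simply clearing the execute bits. Sketched here on a temp file; on a real system you would run chmod a-x (likely with sudo) on the path from the journal message:

```shell
# Demonstrate the fix on a temp file; the real target would be
# /usr/lib/udev/rules.d/77-mm-fibocom-port-types.rules (needs root).
rules=$(mktemp)
chmod +x "$rules"        # reproduce the "marked executable" state
chmod a-x "$rules"       # the fix: strip all execute bits
test ! -x "$rules" && echo "execute bits cleared"
rm -f "$rules"
```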


Here is the journalctl -b -0 output for a clean boot with the working 5.17 kernel. I’ll spend some time comparing it with the journal from the attempted 5.18 boot to see whether Stephane’s observations still hold there.

I think I found the culprit… buried in /etc/mdadm.conf is a line that provides some definitions for the array that wouldn’t mount. It looked like this:

ARRAY /dev/md/UserRAID10 metadata=1.2 name=phenom:UserRAID10 UUID=bee9ca99:c9a86e5e:0d3e9c1a:c5473a21

/dev/md/UserRAID10 is actually a symlink to /dev/md127, so replacing the link with the direct device reference seems to have fixed the issue; the following works just fine:

ARRAY /dev/md127 metadata=1.2 name=phenom:UserRAID10 UUID=bee9ca99:c9a86e5e:0d3e9c1a:c5473a21
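To double-check which device node the link points at before editing mdadm.conf, readlink -f resolves it. Sketched on a mock link, since the real array only exists on the affected machine; there the command would just be readlink -f /dev/md/UserRAID10:

```shell
# Resolve a /dev/md/<name>-style symlink to its real device node.
# A temp directory stands in for /dev/md on the affected system.
mock=$(mktemp -d)
touch "$mock/md127"
ln -s "$mock/md127" "$mock/UserRAID10"
readlink -f "$mock/UserRAID10"   # prints the path of the md127 target
rm -rf "$mock"
```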

I am not sure how kernel changes drive this, but I isolated it to a consistent scenario: the 5.18 kernel has a problem with that mdadm.conf line which 5.17 does not.

In any event, I am current on 5.18 now and can go back to checking for updates every few minutes. :slight_smile:

