You could try running a filesystem check on /dev/sda5.
Boot up your EndeavourOS live USB.
Right-click on sda5 and choose “Check” from the menu.
Apply the operation from the toolbar.
When it is done, reboot. Let’s see if it resolves the issue.
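If you prefer the terminal on the live session, the same check can be done with fsck on the unmounted partition. A small sketch, assuming the partition is ext4; the device name comes from the thread, but the decoding helper is just illustrative (the exit-status values are documented in fsck(8)):

```shell
# From the live session, with /dev/sda5 NOT mounted:
#   sudo fsck.ext4 -f /dev/sda5     # -f forces a full check even if marked clean
# fsck's exit status is a bitmask; this helper decodes the common values:
interpret_fsck_status() {
  case "$1" in
    0) echo "no errors" ;;
    1) echo "errors corrected" ;;
    2) echo "errors corrected, reboot needed" ;;
    4) echo "errors left uncorrected" ;;
    *) echo "operational/usage error (code $1)" ;;
  esac
}

interpret_fsck_status 1   # -> errors corrected
```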
This looks to me like a damaged/corrupted file system or a failing disk.
Also check that your disk connector is not loose.
Next time it happens, once you boot back into your system, gather some information from the logs:
sudo dmesg | eos-sendlog
journalctl --since "60 min ago" | eos-sendlog
Post the URLs you get here on the forum.
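If the uploaded logs don't show anything obvious, you can also grep the previous boot's kernel messages for storage-related errors. A sketch; the keyword pattern is just a guess at likely culprits, and the canned lines in the demo are made up:

```shell
PATTERN='ext4|nvme|i/o error|ata[0-9]'
# On the real system (kernel messages from the boot before this one):
#   journalctl -k -b -1 --no-pager | grep -iE "$PATTERN"
# Demo on canned lines; counts how many match the pattern:
printf '%s\n' \
  'EXT4-fs error (device sda5): ext4_find_entry:1463: reading directory lblock 0' \
  'nvme nvme0: I/O 12 QID 3 timeout, aborting' \
  'systemd[1]: Started Session 2.' \
  | grep -icE "$PATTERN"   # prints 2
```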
I strongly recommend backing up whatever personal data you have on this disk.
I poked around on Google and discovered other people having a similar issue (it’s not a hardware malfunction) who smoothed it over by editing /etc/default/grub: they added the parameter nvme_core.default_ps_max_latency_us=5500 to the GRUB_CMDLINE_LINUX_DEFAULT= line.
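For anyone else wanting to try this, the change would look roughly like the fragment below. The "quiet" value is just an example of what might already be in the quotes; after editing, GRUB's config has to be regenerated so the parameter is picked up on the next boot:

```shell
# In /etc/default/grub, add the parameter inside the existing quotes, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet nvme_core.default_ps_max_latency_us=5500"
# Then regenerate the GRUB config:
sudo grub-mkconfig -o /boot/grub/grub.cfg
```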
I haven’t crashed again yet since I added that in, but we’ll see.
That sounds great!
All the best!
Just putting these here in case you wouldn’t mind taking a look. This is after I changed the latency.
Just skimmed through your logs; I couldn’t see any errors or failures regarding EXT4-fs.
I had a quick look at this kernel parameter as well and found:
which dates back to 2017 and implies that the issue would have been fixed in the kernel.
Since you started having the issue after an update, I wonder if your kernels got updated too. So could it be that some changes brought back this rather old issue?
I actually just got a kernel update a few hours ago, but TBH I have a script I use for semi-unattended upgrades and cleanup (I know, I know) so I couldn’t say whether there was some other kernel update a day or two before that. I guess we’ll find out if it’s a kernel issue if/when other people complain… I’m using the same repos as everyone else here.
Here is some more information related to a similar issue:
… and it says explicitly:
This kernel parameter may no longer be necessary with recent versions of Linux Kernel. (e.g., v4.14.221, v4.19.175, v5.4.97, v5.10.15, v5.11-rc7, and later)
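Either way, it's worth confirming the parameter actually made it onto the kernel command line after the reboot. A sketch with a hypothetical helper; on the real system you'd feed it `cat /proc/cmdline` (or read the live value from /sys/module/nvme_core/parameters/default_ps_max_latency_us):

```shell
# Report whether the NVMe latency parameter appears in a kernel command line.
has_nvme_latency_param() {
  case "$1" in
    *nvme_core.default_ps_max_latency_us=*) echo "parameter active" ;;
    *) echo "parameter not set" ;;
  esac
}

# Real use:  has_nvme_latency_param "$(cat /proc/cmdline)"
# Demo on a canned command line:
has_nvme_latency_param 'BOOT_IMAGE=/boot/vmlinuz-linux rw nvme_core.default_ps_max_latency_us=5500'
# -> parameter active
```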
If you want to dig a bit more, you could have a look in /var/log/pacman.log for upgrades around the date of the update after which the issue started.
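Something like this would pull the kernel upgrades out of pacman's log. The grep pattern relies on pacman.log's usual `[timestamp] [ALPM] upgraded pkg (old -> new)` lines; the demo entries are made up:

```shell
# On the real system:
#   grep 'upgraded linux (' /var/log/pacman.log
# Demo on canned lines in pacman.log's format; the trailing " ("
# keeps packages like linux-firmware from matching:
printf '%s\n' \
  '[2023-04-16T09:00:01+0000] [ALPM] upgraded linux (6.2.10.arch1-1 -> 6.2.11.arch1-1)' \
  '[2023-04-16T09:00:02+0000] [ALPM] upgraded linux-firmware (20230310 -> 20230404)' \
  '[2023-04-21T09:00:01+0000] [ALPM] upgraded linux (6.2.11.arch1-1 -> 6.2.12.arch1-1)' \
  | grep -c 'upgraded linux ('   # prints 2
```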
It says on April 8 I got 6.2.10.arch1-1, then on the 16th I got 6.2.11.arch1-1, then today (the 21st) I got 6.2.12.arch1-1. But the problem started yesterday (the 20th)… I can’t imagine why I’d go 4 days without a hitch and then suddenly the same kernel starts acting up.