I have a system with many distros on it - several of them Arch-based. Most of them boot quite quickly, ~5 seconds from selection at the boot loader screen. One, however, takes 50-55 seconds from the same point. I would prefer to figure out why, and perhaps eliminate the delay!
What I hope I can get is some direction to aim my searches, as I learn more than I ever thought to know (or want to!) about starting up a system. Here is a systemd-analyze blame which raises some possibilities:
Somewhat truncated, as I doubt any items further down are the source of the slowdown - but I would appreciate any pointers as to which to research first! One of the possibilities is NetworkManager-wait-online.service - but I’m running hardwired, which my limited experience suggests should not cause a slowdown.
Thanks for any directions to turn my attention to…
If you use systemd-analyze critical-chain it will tell you which services are affecting the final boot time rather than just those which are running the longest.
However, given
it’s possible that you’re running with an HDD and so the man page database regeneration is having an impact on overall IO.
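To act on that suggestion, both views can be captured on the slow system and compared side by side. A sketch, assuming a stock systemd install - the parsing step at the end runs on a fabricated sample, since the real output depends on the machine:

```shell
# On the slow system, save both views for comparison (run there):
#   systemd-analyze blame > blame.txt
#   systemd-analyze critical-chain > chain.txt
# With a saved blame dump, the worst offenders are simply the top lines.
# The data below is fabricated for illustration:
printf '%s\n' '32.050s man-db.service' \
              '6.917s NetworkManager-wait-online.service' \
              '1.002s systemd-swap.service' > blame.txt
head -n 2 blame.txt
```

The point of keeping both files is that a unit can top `blame` without ever appearing in `critical-chain` - only the latter shows what actually gates boot.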
On some systems they do slow Arch down (though not by as much as 55 seconds). @antechdesigns had tweaked them before, if I remember correctly - he might be able to help.
That’s usually a one-time operation though?
I mean, not every boot.
I’m mostly on a Gen 4 NVMe drive, and only have a spinner for data storage - apart from mounting it in fstab, it shouldn’t be referenced. I’ll try the other investigation now.
Well - I can’t say I understand the contents entirely, but it doesn’t appear that lvmetad takes long…
critical-chain

```
15:18:27 WD= [~]
└───freebird@nest ─$ systemd-analyze critical-chain
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.
```
lvm is not in use. What is strange to me is that all the other systems are Arch-based too (EndeavourOS, Arch, ArcoLinux). Of course, if it wasn’t strange I would already know how to fix it!
Yeah, but that’s the point - it can act like voodoo and not really show up in systemd-analyze as an effect (well, not always), if I remember correctly… but hopefully not.
For example, it can slow down systemd-swap as a side effect, etc…
lvm2 is really weird on some specific systems
So, just to rule it out, I’d still recommend trying… even if only as a last resort.
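Before masking anything, it’s worth a cheap check of whether the lvm2 units are even enabled on the slow machine. The real check would be `systemctl list-unit-files 'lvm2*'` (run there); the sketch below just filters a fabricated dump to show the idea:

```shell
# Real check (run on the slow system):  systemctl list-unit-files 'lvm2*'
# Fabricated sample of what that might print:
cat > units.txt <<'EOF'
lvm2-monitor.service    enabled
lvm2-lvmpolld.socket    disabled
EOF
# -w keeps "disabled" from matching as a substring of "enabled":
grep -w enabled units.txt
```

If nothing comes back enabled, lvm2 is probably not the culprit and the exclusion test can be skipped.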
Any reason I shouldn’t just remove (uninstall) systemd-swap? I have lots of physical swap space (unused anyway) and from what I’ve read so far it’s for handling alternative swap methods…
I quite like having it available - and I also like never needing it! But a physical swap partition, enabled via fstab, should be enough, I would expect. I am guessing this was added for performance reasons (zswap?) but with consequences.
I don’t push my 32 GB often enough to notice an improvement! I wonder why it’s so slow, though - maybe some configuration issue? More reading…
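For reference, a plain swap partition needs only a single fstab line and no helper service - a sketch with a placeholder UUID, not the actual entry from this system:

```
# /etc/fstab - plain swap partition (UUID is a placeholder):
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  swap  defaults  0 0
```

systemd generates a corresponding .swap unit from that line at boot, so nothing beyond the fstab entry is needed for it to be picked up.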
I don’t think there are any problems - but I need to research the performance tweaks that have systemd-swap as a dependency. If it needs removing, and is not in fact improving performance elsewhere!
Thanks to all for the help - I know more now - which was the object of the exercise!
Because I don’t need either, mainly - and a partition set up once and used by all the different distros doesn’t actually waste many resources. A swap file, on the other hand, would need to be set up on each system and would require me to be less lazy!