Home 'System Admins': How do you handle updates?

This question goes out to the ‘Admins’ out there who maintain a small business’s worth of equipment in their home or have a homelab. How do you keep all your software updated?

My answer to this question is a lot of ssh’ing into my 7 (soon to be 9) systems running EndeavourOS from a terminal like Terminator, running yay --noconfirm --sudoloop on all of them, and waiting until it completes. Then, a quick sudo reboot to finish the process. I generally do this twice a week.
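Spelled out, each box gets the same two steps:

```bash
# per machine, over ssh
yay --noconfirm --sudoloop   # full repo + AUR update, no prompts
sudo reboot                  # pick up the new kernel and friends
```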

For an automation guy, this is tedious.

I wish I could script this out to kick off updates on all my systems at once and reboot them when it’s done, but since you need to enter your password to start updates, that plan was killed before it got off the ground.

So… what do you guys do? Do you have a mostly manual process like me, or have you figured out some magic to take some of the pain out of it?

Edit: A small bit of clarification on my part - my situation is a family’s worth of personal systems: gaming rigs, laptops, general work-and-play machines… Not servers.

I don’t use Arch-based distros for server purposes. That way I can safely auto-update those boxes.

These are not servers. But yeah, generally a good bit of advice.

I have 6 EndeavourOS installations at the moment, but I handle the updating of all of them manually, and not all at the same interval, as the needs differ.

I wouldn’t automate it personally, as breaking updates that require some manual intervention come along often enough to be an issue, and an automated process might not handle them.

The recent update that pushed Nvidia Pascal GPUs to an AUR driver, for example, required manual intervention on one of those systems.

The switch in VLC from an all-in-one unified package to plugin-based packaging is another example.

Then of course, there are breaking updates like the GRUB update some years ago that resulted in a failed boot if not handled with some care.

Agreed that ‘fully automatic’ updates are not the solution. I think it is generally a good idea to check https://archlinux.org/news/ before blindly running updates. My thought is more of a script that can run through your systems and update them all at once after you have confirmed it is safe to do so. It does not come without some risk, of course.
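Something like this rough sketch is what I have in mind (hostnames are made up, and it assumes SSH key auth plus - the sticking point - passwordless sudo for pacman/yay on each box):

```bash
#!/usr/bin/env bash
# Sketch only: fan the update out over ssh, reboot each box on success.
# Assumes ssh keys everywhere and a NOPASSWD sudo rule for the update -
# exactly the part I haven't been willing to set up.
hosts=(rig1 rig2 laptop1 laptop2)   # hypothetical hostnames

for h in "${hosts[@]}"; do
    echo "=== updating $h ==="
    # -t gives sudoloop a tty; run sequentially so the output stays readable
    if ssh -t "$h" 'yay --noconfirm --sudoloop'; then
        ssh "$h" 'sudo reboot'
    else
        echo "!!! $h failed - leaving it up so I can poke at it"
    fi
done
```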

This is one more case where Linux, especially Arch Linux or its derivatives, is not yet ready for big deployments. When we are talking about 10 or more Arch Linux installations, getting deployments and updates done is still a tedious task that requires manual intervention. That is why an OS like Windows and distros like Debian shine. It is not that Windows and Debian are great or not great; rather, they offer predictability, like Patch Tuesday, and tools to automate this without any human intervention.

Not saying EOS or other Arch-based distros suck. I love EOS.
Disclaimer: Debian is a good distro. It has its own merits, and EOS/Arch have theirs.

I’d personally have some concerns even running a script that updates all computers simultaneously, even after diligently checking https://archlinux.org/news/.

The GRUB update that broke things a few years back is one example where the issue wasn’t mentioned on the Arch news until well afterwards. In fact, to their credit, the EndeavourOS folks were the first to really address it properly.

If one system is updated at a time, at least you’d catch the problem early and only need to fix one system, instead of simultaneously updating 7 or 9 systems, only to discover they’re all broken and you now have no easy way of accessing the information and resources needed to fix them.

Not sure I could disagree more. My small team of 6 managed 200+ Linux/AIX systems 25 years ago, all with home-grown automation scripts that required zero manual intervention; well, unless there was a faulty update, of course, but that’s going to happen on any system eventually.

Right now I have 12 different EOS-based systems that I manage for DE/update testing. If these systems were critical, I could automate updates, but that would require mounting the QEMU qcow2 images, setting up a chroot, and going from there - definitely not worth the effort.
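For the curious, that route would look roughly like this (image path and partition numbers are illustrative, and pacman needs working DNS inside the chroot):

```bash
# Rough sketch - image path and partition layout are illustrative
sudo modprobe nbd max_part=8
sudo qemu-nbd --connect=/dev/nbd0 /var/lib/libvirt/images/eos-test.qcow2
sudo mount /dev/nbd0p2 /mnt                      # root partition varies per image
sudo arch-chroot /mnt pacman -Syu --noconfirm    # arch-chroot handles the API mounts
sudo umount /mnt
sudo qemu-nbd --disconnect /dev/nbd0
```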

OS like Windows shine??? I guess that’s true now. I’ve been running a Windows 11 VM for over a year and the updates have slowed to a trickle; basically only the big updates, like 24H2 → 25H2, took any time at all. I remember the pains of Patch Tuesday, especially with Windows 7, where updates would take so long you thought they’d failed.

This makes a lot of sense when GPU drivers or major kernel versions are being updated and the systems have similar hardware. If you’re unfortunate enough to have to manage a network of completely different systems, then auto-updating seems very dangerous.

I’d actually suggest that automatically updating different systems is less likely to land you with a network of bricked computers, as the issues are not going to be ubiquitous.

If one system is running Ubuntu, one Debian, one FreeBSD, one Arch, one Windows (:grimacing:), etc, it’s incredibly unlikely all of them will be bricked simultaneously. If they’re all running Arch though… well, history shows it’s entirely possible.

I find this incredibly hilarious after micro$lop recently pushed an update that bricked systems. KB5074109, I think. Had to make sure the GF’s systems (she’s the only one still on Windows) didn’t update. Ended up blocking Microsoft’s servers in Pi-hole temporarily.

I think the point is being missed here.

Let’s say, for the sake of argument, that an update is going to cause issues on one or more of my systems, and, also for the sake of argument, that it is either not yet a known issue posted on the news site or some other not-easily-fixed problem. I’m going to play this scenario out at a few levels of severity.

Severity One: Manual intervention required.

Automation:

I run the script and it fails because the operation does not complete. I have to intervene manually. I research the issue and eventually fix it after some wasted hours. I make this change on all my systems it affects.

Manual:

I type the commands in manually and the update fails. I have to intervene manually. I research the issue and eventually fix it after some wasted hours. I make this change on all my systems it affects.

Severity Two: Bad configuration/drivers. No DE.

Automation:

I run the script and it completes without errors. Upon rebooting my systems I notice the problem. Because I am not a casual user and have been doing this for 25 years, I just grab my Ventoy USB stick (or use the system I ran the updates from, as it would not have been rebooted yet) and research the issue. After extensive research and trial and error, I find the cause and fix it. I apply the fix to all my systems.

Manual:

I type the commands in manually and everything completes without errors. Upon rebooting my system I notice the problem. Using one of the other computers on my network (or a live boot, because I want an excuse to use that Ventoy USB stick I made), I research the issue. After extensive research and trial and error, I find the cause and fix it. I apply the fix to all my systems.

Severity Three: Catastrophic, system will not boot.

Automation:

I run the script and it completes without errors. Upon rebooting my systems I notice the problem. Because I am not a casual user and have been doing this for 25 years, I just grab my Ventoy USB stick (or use the system I ran the updates from, as it would not have been rebooted yet) and research the issue. After extensive research and trial and error, I find the cause and fix it. I apply the fix to all my systems.

Manual:

I type the commands in manually and everything completes without errors. Upon rebooting my system I notice the problem. Using one of the other computers on my network (or a live boot, because I want an excuse to use that Ventoy USB stick I made), I research the issue. After extensive research and trial and error, I find the cause and fix it. I apply the fix to all my systems.

In each scenario, I am not saving myself any work by taking the manual option. I still have at least one problem that I need to take time to fix, and once I figure out that problem and its solution, I need to propagate the solution to all my systems, even the ones that didn’t get the update yet.

‘Automation’ is a very scary word. Trust me, in the wrong hands - hands that don’t understand its power to DESTROY an entire network of machines - it can be absolutely terrifying. But I have been automating dev-ops tasks for nearly two decades. There are tasks that should not be automated and some that should, all based on risk and the affected users. Should a casual user automate installing updates? NO. Full stop. Should a sysadmin who knows how to fix things when they break push such automation onto a production network? Very unwise. But a sysadmin on a non-critical home network? I see no issue or danger. We are going to have to fix the faulty update anyway; that it affects more than one system isn’t that big of a deal, especially if you can easily get everyone back up and running with live sticks while you work on the problem.

How does the manual scenario scale in the case of multiple systems?

Now, if we look at the automation scenarios, it does appear that the update is run on a single system first, and then, if everything works fine, the update is run on the remaining systems. There are a few issues with this.
Firstly, the updates available for installation on Mon, 2-Feb-2026 are “x”, while the updates available for installation on Tue, 3-Feb-2026 will be “x+y”. That is the nature of a rolling distro like EOS and Arch. So all the verification has to be done on a single day, in our case Mon, 2-Feb-2026; otherwise it has to be repeated all over again. Of course, this can be overcome by replicating the repos onto the local LAN (see the sketch below), but that does not work for 90% of folks.
Secondly, there is no uniformity of systems in a house or a small setup. One person has a pure Intel setup, another may have a pure AMD setup, while another has an Intel+Nvidia setup, and so on. So the update has to be verified on each system.
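For what it’s worth, a lighter alternative to a full local mirror is pinning every box to the same dated snapshot of the Arch Linux Archive - a sketch (the date is illustrative, and this only covers the Arch repos, not the EOS one):

```bash
# Pin every machine to the same Arch Linux Archive snapshot (date illustrative),
# so Monday's verification still matches what Tuesday's boxes will install.
# Single quotes keep $repo/$arch literal, as pacman expects in its mirrorlist.
echo 'Server = https://archive.archlinux.org/repos/2026/02/02/$repo/os/$arch' |
    sudo tee /etc/pacman.d/mirrorlist
```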

To be completely fair, the chances of bricking a system via Arch are slim. It might happen once or twice a year, not more than that. On the EOS forum, over the past year and more, I have not come across a post by a user who has managed to brick his/her system twice or more in a single year by running pacman -Syu. Even windoze manages to mess up its updates occasionally - remember SolarWinds and the recent goof-up.

But all of these points against automation do not contend with two factors. The first is that manual time and effort spent updating systems is time and effort not available for other tasks; there are only 24 hours in a day, of which 8 have to be spent on sleep. Hence automation. The second is that the manual method works for a few computers, but it simply does not scale. Hence automation, despite its drawbacks and risks, wins.

My two cents. Not disparaging what you wrote.

I think something got lost in translation here. First, I did not suggest ‘run on one system first, then do the rest.’ All would be updated at once; just the system you are currently running the automation on would not be rebooted right away (because you are using it to run the automation), so if you notice one of the other systems has an issue, you can hold off on that last reboot until you figure it out.
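In script form, the idea is roughly this (hostnames made up; same key/NOPASSWD assumptions as before):

```bash
#!/usr/bin/env bash
# Sketch: update every box at once, then reboot everything except the
# machine driving the automation, leaving one known-good seat to debug from.
hosts=(rig1 rig2 laptop1 laptop2)   # hypothetical

for h in "${hosts[@]}"; do
    ssh "$h" 'yay --noconfirm --sudoloop' > "/tmp/update-$h.log" 2>&1 &
done
wait    # let every update finish before anything reboots

for h in "${hosts[@]}"; do
    [[ "$h" == "$(hostname)" ]] && continue   # keep this seat up
    ssh "$h" 'sudo reboot'
done
```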

Secondly, I am FOR automation. I WANT automation. I write scripts to automate everything in my home, and I WANT automation for updating my Arch systems. My scenarios were meant as an argument FOR automation, as doing it manually just means MORE WORK for no extra protection in case something happens. I am not sure where I went wrong in writing that to have it read as the complete opposite of what I was trying to say.

The live ISO does not replace an established desktop environment. For example, a gaming system ceases to be a gaming system if you’re forced to boot via a live ISO. My workstation, which I use for various development tasks, is not easily replaced by a live ISO either.

However, if in your scenario, there is no issue with causing all users to resort to a live ISO while a problem is resolved, then I guess you can take that as a win for automation.

I would also like to say, I appreciate the conversation! I don’t want this to come across in any way as a pile-on. I’ve enjoyed your insights, and those of others.

Use case is always important. We do have three gaming rigs, only two of which are using Linux right now. And while those two systems might get caught in some sort of terrible update hell, this can be mitigated by scheduling: run the updates when the systems are least likely to be in use.
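As a sketch, the update run could even be kicked off from a timer on the controller (the schedule, unit name, and script path are made up, and a transient unit like this won’t survive a reboot):

```bash
# Sketch: fire the (hypothetical) update-all script during off-hours
systemd-run --user --on-calendar='Mon,Thu 04:00' \
    --unit=offhours-update \
    "$HOME/bin/update-all.sh"
```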

However, I would argue that in this house all they really need is a live ISO. All they ever do is open a web browser. Nothing intensive.

And I do not see it as a pile-on. (Though I am sort of confused as to why the one guy who seemingly agrees with me somehow thought I was arguing counter to my own point, lol.) I posted this knowing that I would mostly be in the minority. Automation has been a big part of my life, work and otherwise, so while I do fight for it, I also enjoy hearing the other side of things - from those who may see more need for caution than I do. All insights are good ones.

I only have three systems running EndeavourOS. I check for updates several times a day on one (my PC), 2-3 times a week on another (my parents’ laptop), and 2-3 times a month on the last one (an ancient ThinkPad).

I do this via the console using yay or eos-update --aur. I read through the updates beforehand and then confirm the update or not.
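Concretely, the routine looks something like this (checkupdates comes from pacman-contrib; the exact commands are a sketch of the flow, not a transcript):

```bash
checkupdates    # list pending repo updates without touching the real sync databases
yay -Qua        # list pending AUR updates
yay             # or: eos-update --aur  - read through the list, then confirm or abort
```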

I have a server and a desktop, both running the same distro. However, the updates are applied manually and staggered a bit. I am confident in my ability to recover from update issues, and it is convenient to keep on top of updates from one source rather than several.
Technically there’s a laptop too, but it hasn’t been powered on for six months ;0

I care about frequent updates on my main machine; other people and machines don’t have that requirement. So usually once a month is update day here. There must be at least two hours of additional free time available, on top of the expected update time, to fix any unexpected issues.

I either run Arch-based systems, which I’m familiar with and where I know about recent update issues from my main machine - that speeds things up. No automation though; you have to apply eyeballs. The second, “automated” option: install an immutable distro. Especially if the computer is used by a different user, show them the update button and let them handle it.

My home server (RPi 4 running Raspberry Pi OS) is a file server, VPN server, Nextcloud server, AdGuard server, and home automation server.

I update daily; I manually run a script that updates the OS, Nextcloud, and the Nextcloud apps, and I check which updates are proposed before allowing them to be installed.
Only the AdGuard filter lists are updated automatically, once every 24 hours.
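A sketch of what such a script might look like (the paths, web server user, and occ location are assumptions about a typical install, not the actual script):

```bash
#!/usr/bin/env bash
# Sketch of a daily review-then-update routine - paths/users are assumptions
set -euo pipefail

# OS packages (Raspberry Pi OS is Debian-based)
sudo apt update
apt list --upgradable                       # eyeball the proposed updates first
read -rp "Proceed with OS upgrade? [y/N] " ok
[[ "$ok" == "y" ]] && sudo apt full-upgrade

# Nextcloud apps (occ must run as the web server user)
sudo -u www-data php /var/www/nextcloud/occ update:check   # report what's pending
read -rp "Update Nextcloud apps? [y/N] " ok
[[ "$ok" == "y" ]] && sudo -u www-data php /var/www/nextcloud/occ app:update --all
```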