Friendly reminder to do some system maintenance

Greetings lovely community,

This is just a quick reminder: if you haven’t done it in a while, feel free to run the following (it may require root privileges):

paccache -r

This command “deletes all cached versions of installed and uninstalled packages, except for the most recent three, by default” (Arch Wiki). You don’t have to do this if you don’t want to (it’s good system maintenance though!), but if you haven’t run the command in a while, you could reclaim a few gigabytes of storage. Personally, I run the command once a month and generally reclaim about 1–2 GB of space. I have a small 256 GB SSD in my laptop, so getting back a few gigs is a benefit.
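A few other paccache invocations you may find handy (flag meanings are per the paccache man page; the numbers are just examples, so adjust to taste):

```shell
# Dry run: show what would be removed without deleting anything
paccache -d

# Keep only the single most recent version of each package
sudo paccache -rk1

# Remove every cached version of packages that are no longer installed
sudo paccache -ruk0
```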

Also from the same wiki link above: if you don’t want to run the command manually, you can have it run automatically once a week via a systemd timer, which is something I just learned about and decided to try out for myself. It’s not terribly difficult to enable, but the Arch Wiki doesn’t explain it in a very beginner-friendly way, so I’ll break it down in case anyone is curious to automate the paccache -r command (note: run these commands one at a time):

sudo systemctl enable paccache.timer

sudo systemctl start paccache.timer

systemctl status paccache.timer

The first two commands enable the timer and start it up. The third command is optional, but it’s a good way to confirm the timer is enabled and working; the output should show “Active (waiting)”.

Hope this was helpful to someone. And feel free to share any other relevant system maintenance commands you like to use from time to time that others might benefit from knowing as well :slight_smile:

Edit: A little bit more about paccache can also be found over in the EndeavourOS wiki:


Do them both at the same time with the --now option:

sudo systemctl enable --now paccache.timer

This enables and starts the timer in one command instead of two.


@Stagger_Lee Yup, that is a more efficient way to do it! I was just breaking the process down one step at a time, but thank you for mentioning it :wink:


There is another approach you can take to limit how much cache pacman uses. I have my machines set up with a pacman hook that cleans up the cache whenever I run pacman, which doesn’t involve any additional services. You can do this by simply creating a file /etc/pacman.d/hooks/clean_package_cache.hook with the following contents:

[Trigger]
Operation = Upgrade
Operation = Install
Operation = Remove
Type = Package
Target = *

[Action]
Description = Cleaning pacman cache...
When = PostTransaction
Exec = /usr/bin/paccache -r

This call to paccache keeps the three most recent copies of each package, though you can get more creative with additional parameters if that default doesn’t suit your needs.
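As one example of those additional parameters (a variation of my own, with the flag taken from the paccache man page), you could keep only two versions instead of three by changing the hook’s Exec line to:

```
Exec = /usr/bin/paccache -rk2
```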


Mine is already enabled; I guess I enabled it through the Welcome app:

$ sudo systemctl list-timers | grep paccache
Mon 2022-03-07 00:00:00 -03 5 days left         Mon 2022-02-28 07:42:15 -03 1 day 10h ago paccache.timer               paccache.service

@MrToddarama unless there are some downsides to systemd timers I’m not yet aware of, I personally find it easier to keep track of my timers via the systemctl list-timers command than to keep track of all the hooks I have enabled. Is there an easy way to display a list of all of them, or do you just have to already know?

@mcury I’ve used that Welcome app option once or twice before (not frequently, but a few times), but I wasn’t aware it enabled and started a systemd timer. Or is it something different from paccache -r? This is what I’m referring to, btw:

[Screenshot from 2022-03-01 16-23-46]

I’m not entirely sure whether these two things are exactly the same. I only enabled the paccache timer today, and it shows n/a for the dates because it hasn’t run yet, but I know I’ve used the Welcome app cache cleaner before, so that’s why I’m curious about this now.

And these are the timers I currently have enabled and running, per systemctl list-timers:

[scott@endeavourOS ~]$ systemctl list-timers
NEXT                        LEFT        LAST                        PASSED             UNIT                         ACTIVATES                     
Wed 2022-03-02 00:00:00 EST 7h left     Tue 2022-03-01 00:00:06 EST 16h ago            logrotate.timer              logrotate.service
Wed 2022-03-02 00:00:00 EST 7h left     Tue 2022-03-01 00:00:06 EST 16h ago            shadow.timer                 shadow.service
Wed 2022-03-02 04:25:34 EST 12h left    Tue 2022-03-01 05:31:06 EST 10h ago            man-db.timer                 man-db.service
Wed 2022-03-02 11:42:11 EST 19h left    Tue 2022-03-01 14:27:36 EST 1h 38min ago       updatedb.timer               updatedb.service
Wed 2022-03-02 13:12:39 EST 21h left    Tue 2022-03-01 13:12:39 EST 2h 53min ago       systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Sat 2022-03-05 15:00:00 EST 3 days left Sat 2022-02-05 15:00:02 EST 3 weeks 3 days ago pamac-cleancache.timer       pamac-cleancache.service
Mon 2022-03-07 00:00:00 EST 5 days left n/a                         n/a                paccache.timer               paccache.service
Mon 2022-03-07 00:42:39 EST 5 days left Mon 2022-02-28 00:24:02 EST 1 day 15h ago      fstrim.timer                 fstrim.service

8 timers listed.
Pass --all to see loaded but inactive timers, too.
[scott@endeavourOS ~]$ 

I only used the Welcome app option. I have these timers:

NEXT                        LEFT                LAST                        PASSED        UNIT                         ACTIVATES                     
Tue 2022-03-01 19:00:00 -03 29min left          Tue 2022-03-01 18:00:10 -03 30min ago     snapper-timeline.timer       snapper-timeline.service
Wed 2022-03-02 00:00:00 -03 5h 29min left       Tue 2022-03-01 00:00:18 -03 18h ago       logrotate.timer              logrotate.service
Wed 2022-03-02 00:00:00 -03 5h 29min left       Tue 2022-03-01 00:00:18 -03 18h ago       shadow.timer                 shadow.service
Wed 2022-03-02 01:40:35 -03 7h left             Mon 2022-02-28 08:16:29 -03 1 day 10h ago updatedb.timer               updatedb.service
Wed 2022-03-02 10:26:17 -03 15h left            Tue 2022-03-01 15:31:28 -03 2h 59min ago  man-db.timer                 man-db.service
Wed 2022-03-02 17:38:22 -03 23h left            Tue 2022-03-01 17:38:22 -03 52min ago     snapper-cleanup.timer        snapper-cleanup.service
Wed 2022-03-02 17:43:19 -03 23h left            Tue 2022-03-01 17:43:19 -03 47min ago     systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Mon 2022-03-07 00:00:00 -03 5 days left         Tue 2022-03-01 00:00:18 -03 18h ago       btrfs-trim.timer             btrfs-trim.service
Mon 2022-03-07 00:00:00 -03 5 days left         Mon 2022-02-28 07:42:15 -03 1 day 10h ago paccache.timer               paccache.service
Fri 2022-04-01 00:00:00 -03 4 weeks 2 days left Tue 2022-03-01 00:00:18 -03 18h ago       btrfs-defrag.timer           btrfs-defrag.service
Fri 2022-04-01 00:00:00 -03 4 weeks 2 days left Tue 2022-03-01 00:00:18 -03 18h ago       btrfs-scrub.timer            btrfs-scrub.service

11 timers listed.

Scotty_Trees … no real downside. Sorry if my message came across as implying that. Was really just providing a second approach and another option… all of that ‘choice is good’ mentality :slight_smile:

I get what you are saying about listing timers. I don’t really view the paccache cleanup step as a separate scheduled event, but instead consider it to be part of the overall update process… so with my mindset it seems logical to make it part of each execution of pacman.

Both approaches are certainly valid and will clean up the clutter … different strokes for different folks!


Welcome’s paccache manager periodically and automatically cleans up the package cache according to the settings you give. It starts the required systemd service in order to do its work.

You can look at its bash code at /usr/bin/paccache-service-manager.


@mcury Thanks for the timer lists; I can compare them to mine to help wrap my head around this a little better :wink:

@MrToddarama no worries, your message didn’t really imply that; that was just where my mind first went. Your approach is just as good, and it’s also found in the EndeavourOS wiki section, which I added as an edit to the top post for clarity. In regards to your pacman execution style… I guess you want it to function like a Swiss Army knife, able to do many things at once. Nothing wrong with that, of course. So far, at least for now, I’ve taken the approach of simplicity and keeping my pacman as lean and efficient as I can; basically do one thing and do it well. In either case, we’re both right. Though, if you don’t run a pacman transaction for, say, 2–3 weeks for whatever reason, you’ll definitely have a bigger cache than I would :wink:

@manuel I had run systemctl status paccache.timer before I enabled the paccache timer, and the service wasn’t set as Active; so even though I’ve run the Welcome app cleanup before, the paccache timer was never active. Is there a systemd service somewhere that shows whether I have this Welcome app cleanup enabled, or am I just overthinking it a bit?


What you wrote is very useful. From time to time I also delete downloaded packages. Let’s just say I hadn’t thought until now that this could be automated.


@Scotty_Trees Is there any trouble in using sudo pacman -Sc? I usually use this after removing packages that I didn’t like.

 systemctl status paccache.timer

Looks like EOS is starting the service from /etc/systemd/system/paccache.timer.

Thank you for reminding me. I just got 8.7GiB back. I did not need them so urgently but thanks anyway. :sweat_smile:


Don’t forget your systemd journals; they can get big too, up to 4 GB if left unchecked.

sudo journalctl --vacuum-size=50M
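In case size isn’t the criterion you care about, journalctl can also vacuum by age or by number of archived files (both are standard journalctl flags; the values here are just examples):

```shell
# Remove archived journal entries older than two weeks
sudo journalctl --vacuum-time=2weeks

# Keep no more than five archived journal files
sudo journalctl --vacuum-files=5
```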

I decided to go one step further. Instead of having to manually run the command you mentioned, I found this option via the Arch wiki:

If the journal is persistent (non-volatile), its size limit is set to a default value of 10% of the size of the underlying file system but capped at 4 GiB. For example, with /var/log/journal/ located on a 20 GiB partition, journal data may take up to 2 GiB. On a 50 GiB partition, it would max at 4 GiB.

I haven’t needed my systemd logs (currently using 4 GB on my system) for anything, and may (hopefully) never need them for any issues, so I decided to knock that limit down from 4 GB in my case to 100 MB. You can set your value to whatever you like, of course.

To confirm the current usage and limits on your system, review the systemd-journald unit logs:

journalctl -b -u systemd-journald

This will show output telling you how much space is being taken up by the logs. If you want to change the systemd journal size limit system-wide, so you don’t have to run the --vacuum-size=X command manually, you can do the following:

sudo nano /etc/systemd/journald.conf

You’ll want to uncomment (i.e. delete the #) the line “SystemMaxUse=” and add the value you want; in my case I’ll choose 100MB for this:
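After the edit, the relevant section of /etc/systemd/journald.conf reads like this (100M being my chosen value):

```ini
[Journal]
SystemMaxUse=100M
```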


Quick side note: I’m not sure whether it matters if it’s 100M or 100MB, but the Arch Wiki only shows “M” after the size, so I’ve just gone with that.

Then you can just save the file when you’re done.

The Arch Wiki wasn’t entirely clear on this point (at least to me), but since we edited a systemd journal config file, I went ahead and restarted the service so the system picks up the new value. I assume a full reboot would accomplish the same thing. If I’m wrong or mistaken on this part, please feel free to correct me; I don’t mind being wrong, and I’m always willing to learn.

So to restart the service:

sudo systemctl restart systemd-journald.service

Now you should be good to go and won’t have to worry about running the vacuum command ever again. If you ever need more journal history for troubleshooting, you can comment the SystemMaxUse line back out. Hope this was as helpful to you as it was for me to learn and write about.


:point_down: This wasn’t clear enough? :grin:

Restart the systemd-journald.service after changing this setting to apply the new limit.

Note that if you use a drop-in file placed in the /etc/systemd/journald.conf.d directory rather than editing journald.conf directly, you avoid getting pacnew files, since you haven’t edited the shipped file. Using a drop-in file is specifically mentioned (with an example given) in the Arch Wiki article, too.
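A quick sketch of the drop-in approach (the file name 00-journal-size.conf is my own invention; any *.conf name in that directory works):

```shell
# Create the drop-in directory and file instead of editing journald.conf itself
sudo mkdir -p /etc/systemd/journald.conf.d
printf '[Journal]\nSystemMaxUse=100M\n' | sudo tee /etc/systemd/journald.conf.d/00-journal-size.conf

# Restart journald so the new limit takes effect
sudo systemctl restart systemd-journald.service
```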


Thanks for the tips, but when I ran paccache -d for a dry run, I got:
==> no candidate packages found for pruning

Then I ran systemctl status paccache.timer

paccache.timer - Discard unused packages weekly
     Loaded: loaded (/usr/lib/systemd/system/paccache.timer; disabled; vendor preset: disabled)
     Active: inactive (dead)
    Trigger: n/a
   Triggers: paccache.service

What process or hook is cleaning up the cache?
Where would I find it?


cat /usr/lib/systemd/system/paccache.service
[Unit]
Description=Remove unused cached package files

[Service]
ExecStart=/usr/bin/paccache -r

Interesting. I’m not used to working with pacman hooks, but could I use this to do the same thing for the yay and paru caches by adding something like the lines below?

Exec = paccache -r -c ~/.cache/yay/*/
Exec = paccache -r -c ~/.cache/paru/clone/*/