High disk usage in /home

I was doing a Clonezilla backup today and discovered that my /home is using far more space than expected. Duf reports 8.1G in use; it shouldn’t be more than a couple of gigabytes. When I select all content in /home in PCManFM and right-click to check properties, the total size of files and folders is 2.5G.

I thought perhaps some Snapper home snapshots have gone MIA.

I did
sudo btrfs subvolume list -s / | wc -l
…and it reports 2 lines, i.e. two snapshot subvolumes.
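
To see which subvolumes those actually are, I believe the same command without the count will list them:

sudo btrfs subvolume list -s /
# -s limits the output to snapshot subvolumes; each line shows the subvolume ID, generation, and path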

But when I do
sudo snapper list
… it reports no snapshots (I deleted all of them).

I’m almost certain I read in a thread here that old snapshots can linger in the filesystem undetected by snapper, and that there is a method to clean them up. But for the life of me I can’t find it again.

I’d like to learn how to figure this out; any tips are welcome.
I can’t be sure this is a snapper issue.

I did
sudo du -a /home/ | sort -n -r | head -n 20

which reports

3157492	/home/vlad
3157492	/home/
1215316	/home/vlad/.cache
1123928	/home/vlad/.cache/paru
1122604	/home/vlad/.cache/paru/clone
732076	/home/vlad/.icons
451864	/home/vlad/.cache/paru/clone/joplin-appimage
437580	/home/vlad/.local
437564	/home/vlad/.local/share
437100	/home/vlad/.cache/paru/clone/onlyoffice-bin
372264	/home/vlad/.local/share/fonts/microsoft fonts
372264	/home/vlad/.local/share/fonts
277484	/home/vlad/.mozilla/firefox
277484	/home/vlad/.mozilla
263076	/home/vlad/.mozilla/firefox/iube05kt.default-release
249852	/home/vlad/.cache/paru/clone/onlyoffice-bin/onlyoffice-bin-7.2.1-1-x86_64.pkg.tar.zst
225932	/home/vlad/.cache/paru/clone/joplin-appimage/Joplin-2.9.17.AppImage
225680	/home/vlad/.cache/paru/clone/joplin-appimage/joplin-appimage-2.9.17-1-x86_64.pkg.tar.zst
214252	/home/vlad/.cargo/registry
214252	/home/vlad/.cargo
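
(du prints sizes in 1 KiB blocks by default, so the 3157492 for /home/vlad is roughly 3.0 GiB.)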

It could be I have misunderstood how the Duf app works.

I may also have misunderstood how the snapper list command works: it seems to list snapshots only for the default profile (root), not for all profiles.
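
For the record, this seems to be the way to check the other profiles (pieced together from the man page, so double-check):

sudo snapper list-configs     # show all snapper configs (e.g. root, home)
sudo snapper -c home list     # list snapshots for the home config only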

When I do
cd ~; du -hs
to see the disk usage of /home, it only reports 3.1G, which is close to what I expect.

Duf seems to report the disk usage of the whole of / on the /home line as well. I don’t get what the purpose of that would be, but OK.
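
If I understand it right, all of those mount points are subvolumes of one filesystem, which something like this should confirm:

findmnt -t btrfs -o TARGET,SOURCE
# every btrfs mount should show the same source device, just with a different subvolume path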

I guess 8G in total use by my installation and apps is about what is to be expected at this point.

Here’s how duf reports it:

╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ 5 local devices                                                                                                    │
├────────────┬────────┬────────┬────────┬───────────────────────────────┬───────┬────────────────────────────────────┤
│ MOUNTED ON │   SIZE │   USED │  AVAIL │              USE%             │ TYPE  │ FILESYSTEM                         │
├────────────┼────────┼────────┼────────┼───────────────────────────────┼───────┼────────────────────────────────────┤
│ /          │ 237.5G │   8.1G │ 227.9G │ [....................]   3.4% │ btrfs │ /dev/disk/by-uuid/8acc6775-eb54-4a │
│            │        │        │        │                               │       │ a6-9e59-ec51fdfd9bc1               │
│ /boot      │   1.0G │ 208.5M │ 837.4M │ [###.................]  19.9% │ vfat  │ /dev/sda1                          │
│ /home      │ 237.5G │   8.1G │ 227.9G │ [....................]   3.4% │ btrfs │ /dev/disk/by-uuid/8acc6775-eb54-4a │
│            │        │        │        │                               │       │ a6-9e59-ec51fdfd9bc1               │
│ /var/cache │ 237.5G │   8.1G │ 227.9G │ [....................]   3.4% │ btrfs │ /dev/disk/by-uuid/8acc6775-eb54-4a │
│            │        │        │        │                               │       │ a6-9e59-ec51fdfd9bc1               │
│ /var/log   │ 237.5G │   8.1G │ 227.9G │ [....................]   3.4% │ btrfs │ /dev/disk/by-uuid/8acc6775-eb54-4a │
│            │        │        │        │                               │       │ a6-9e59-ec51fdfd9bc1               │
╰────────────┴────────┴────────┴────────┴───────────────────────────────┴───────┴────────────────────────────────────╯

You are using a btrfs filesystem, so this is the way duf, df, and others show the space: /, /home, /var/cache, and /var/log are all subvolumes of the same filesystem, so each mount line reports the usage and free space of the whole filesystem.
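
For the filesystem-wide picture, something like the following should help:

sudo btrfs filesystem usage /
# reports allocated vs. used space for the entire btrfs filesystem,
# which is the figure duf/df repeat on every subvolume mount line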

Using e.g.

du -hd1 ~
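# -h: human-readable sizes; -d 1: limit the report to one directory level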

should show the space usage at $HOME.

Thanks! Very useful to know!

It looks like I’ve misunderstood a couple of outputs and commands. It seems a total disk usage of about 8G is expected and reasonable with the packages I’ve installed.

I’d still like to investigate the potential problem of snapshots lost to the filesystem though, for future reference.

In some situations, it seems like snapshots disappear into subdirectories in the /.snapshots subvolume.

I’ll probably be able to learn about this on my own. But if anyone has the link to the thread where it was mentioned here (I seem to remember a procedure to find and delete them), I’d appreciate it!
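
In case it helps someone else, here is my current guess at such a procedure (untested, so double-check before deleting anything):

sudo btrfs subvolume list -o /.snapshots              # list subvolumes directly below /.snapshots
sudo btrfs subvolume delete /.snapshots/42/snapshot   # 42 is a made-up example number
# a leftover would show up in the btrfs listing but not in snapper list for the matching config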

I’ll do some further searching on this forum.

Normal du won’t be fully accurate for btrfs either.

For btrfs, you can use something like:

sudo btrfs filesystem du -s /home
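
For reference, the summary output has Total, Exclusive, and Set shared columns, so you can see how much of the data is unique to /home versus shared with snapshots.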

That being said, it really depends on what question you are trying to answer.

Did you program pre-cognition into the Btrfs Assistant?

There’s a snapshot done with the /home profile, named “Before the Big Cleanup”, which I never made.

I swear I haven’t been drinking more than what is reasonable on a Friday evening.

Time to learn some net security stuff. It’s been at the bottom of the list, with plenty of other things on the plate.

EDIT: Ah, I actually copied a command from the openSUSE snapper wiki, which I went through to see if there was anything about snapshots ending up in subdirectories.

Enough beer this evening; it is a workday for me tomorrow.

snapper -c home create --description "before the big cleanup"
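# -c home targets the home profile, and create takes a snapshot immediately,
# which would explain the mystery "Before the Big Cleanup" snapshot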

By default, snapper creates a nested subvolume called .snapshots inside the subvolume you are taking snapshots of. It then creates a subdirectory for each snapshot under that, containing the snapshot itself as another nested subvolume.
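
A rough sketch of the resulting layout for a home config (paths assumed as an example):

/home/.snapshots               # the nested subvolume snapper creates
/home/.snapshots/1/info.xml    # metadata for snapshot number 1
/home/.snapshots/1/snapshot    # the snapshot itself, another nested subvolume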

OK, I see.

There’s been talk about snapshots disappearing from the file system in some situations, and I wanted to learn about it so I could be prepared if it should happen.
