Btrfs as a home user - why?

Yes.

In truth, there is almost always some benefit to enabling it. Even in the case of /home where it may choose to not compress big sections of data, it will compress what it can and not compress what isn’t worth it.

It’s a snapshot. :smile:

I think part of this is that I am just lazy. I am sitting here reading through tutorials and wikis and it just feels… like a huge step backwards. Not in functionality, but in management.

It all seems extremely hands-on, and that’s what I mean when I say it’s not for the average home user. (It doesn’t help that the official documentation is aimed only at servers, assuming you are a network admin at a company working with huge data clusters etc.)

And on the other end, it seems everyone on Reddit etc. who advocates for it has an extreme use case (while claiming to be an “average user”): either “paranoid” (keeping 6 monthly, 12 weekly, 7 daily and 12 hourly snapshots on top of the auto-created one at every update they’ve set up), or something odd like “Oh, I just bundled 9 old discarded HDDs in an array as a home-made NAS using Btrfs, it’s so easy!”

Even setting it up the way Suse used to do it seems… annoying.

What I want is, if I am going to use snapshots at all:

Auto-created snapshots of my / when an update is made. Maybe… three of them? I don’t see myself needing more, period (1), so they need to be automatically deleted as well, be easy to find, and preferably everything can be set up in a GUI.

(1) Because Arch already has a package cache for rollbacks of individual packages AND I take several actual backups (not snapshots) with Timeshift to an external HDD, covering both the complete / and /home partitions.

Sure. Some example directory data from my current machine (compression zstd level 1) using compsize:

# /usr/share
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       56%      3.4G         6.1G         6.1G

# /usr/bin
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       54%      938M         1.6G         1.6G

# My source dir in home
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       64%      2.6G         4.1G         4.1G

# But my home music
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       99%      8.7G         8.7G         8.7G
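
If you want to check your own directories, compsize simply takes a path and needs root to read every extent; the paths here are just examples:

sudo compsize /usr/share
sudo compsize /home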

So overall a few gigs here and there. Is that worthwhile if you have 200 GB free anyway? Probably not. Is it a nice benefit on a 256 GB laptop? Maybe.

(Btw, I have to wonder how many of Fedora’s and Suse’s users actually do any of this at all, rather than just clicking “next, next, next” at install and then never using any features of the btrfs file system at all. My guess: 65-70% of the user base.)

Must be your lucky day … :grin:
This Wiki article provides exactly what you want; check the bottom of the article or see here if you don’t want encryption.

[Edit] Btw, with a simple trick you could get Timeshift to run in btrfs-mode to create btrfs-snapshots and also create rsync backups. But you’re probably not interested, seeing that this would involve some manual intervention and scripting :wink: .

It looks interesting, but is it using timeshift? Because I need that for my Rsync backups to a second drive, and as far as I know timeshift can’t keep track of two different backup schemes?

Anyway, I appreciate all the input, but I think I have concluded that for my own use case, btrfs is not worth it on any level other than very much maybe increasing the life of my / SSD slightly compared to Ext4.

The amount of hands-on manual work needed to make it work seems far, far greater than the benefits I would get out of it, and I am not in the mood to try to implement it just because I can.

(Marking this as “solution”, because it is my final conclusion: btrfs is just not worth the hassle for me, specifically as long as you can’t set it up directly in the installer, like in Suse, and Calamares simply doesn’t have that functionality.)

Note: I have since switched to Btrfs on my /, BUT it required a completely redone partition scheme to work as intended, so this would still be the “solution” to my original question.

Your use case is rather atypical for home users. I would argue that at your data writing rate, you should consider server grade storage drives if you are unsatisfied with the life span of the drives.

67TB is a pretty good chunk out of a 240GB drive’s TBW. The health shown is on par with this usage.

Well, have you tried Garuda before saying that Calamares cannot set up btrfs?

I actually didn’t want to comment, but anyway:

This type of discussion is what makes me glad that I force all garuda users to use btrfs by default.

Yes, it relies on Timeshift and timeshift-autosnap for the automatic creation of snapshots when the system updates.
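
If I remember right, timeshift-autosnap is configured through /etc/timeshift-autosnap.conf, and something along these lines should give roughly the “keep three, clean up automatically” behaviour asked for above (key names quoted from memory, so verify them against the file the package actually installs):

# /etc/timeshift-autosnap.conf (sketch, names from memory)
skipAutosnap=false
deleteSnapshots=true
maxSnapshots=3
updateGrub=true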

You could create btrfs-snapshots and rsync backups through Timeshift on the same system simply by using two different Timeshift configurations (default = /etc/timeshift/timeshift.json).
Set up Timeshift in btrfs mode, copy and rename the json configuration file. Then set up Timeshift in rsync mode, copy and rename the json conf file.

You could now have one set up as the default, with the other called manually or by a cron job, for example. Or run both manually; or …
All you need is a simple script that overwrites timeshift.json with the one containing the mode you need, and maybe reverses that afterwards.
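
A minimal sketch of such a script, assuming you saved the two configurations as timeshift-btrfs.json and timeshift-rsync.json (those names and the comment text are placeholders, not anything Timeshift itself requires):

#!/usr/bin/env bash
# Swap in the rsync-mode config, take one backup, then restore the
# btrfs-mode config as the default again.
set -euo pipefail
conf_dir=/etc/timeshift

cp "$conf_dir/timeshift-rsync.json" "$conf_dir/timeshift.json"
timeshift --create --comments "manual rsync backup"
cp "$conf_dir/timeshift-btrfs.json" "$conf_dir/timeshift.json"

Run it as root (or from a root cron job), since both /etc/timeshift and Timeshift itself need root.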

Installed compsize and took a look. (Right now I am running btrfs on everything but /boot/efi, and discounting all theoretical benefits I seem to save about 1 gig on /usr and not much anywhere else, especially not on /home where I have 99.5% images and videos. I think the saving there was literally 100 MB out of 200 GB, because the only thing it could compress was my 5 spreadsheets and my CV :wink: )

However, if it actually manages to compress / by roughly 30% (not counting the EFI partition that’s on Fat32, and my /home of course), it might be worth continuing to run btrfs on / just for that, and switching to either XFS or Ext4 for my home.

Note that one can add or change the compression method(s) on the installed system later on, but this only applies to newly written files; existing files have to be recompressed.

So unless you had compression activated before filling the drive, you’ll need to recompress.
To recompress /, for example, run
sudo btrfs filesystem defragment -r -v -czstd /
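
One common way to keep compression active for newly written files afterwards is the compress mount option in /etc/fstab; a sketch, where the UUIDs and the @/@home subvolume names are placeholders for your own layout:

UUID=xxxx-xxxx  /      btrfs  subvol=@,compress=zstd:1,noatime  0 0
UUID=xxxx-xxxx  /home  btrfs  subvol=@home,compress=zstd:1,noatime  0 0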

On / I did that after install; on /home I simply didn’t copy any data back from my backup until compression had been set and the system rebooted. (So /home/myusername was empty except for the pre-made empty folders.)

Strange, considering you mentioned only saving about 1G.
My root subvolume, compressed with zstd, has a size of about 13G uncompressed and a compression ratio of more than 30%. Much like the ones @Schlaefer posted above.

I’ll check again.

enables all garuda users by offering powerful default settings and preinstalled features out of the box

There, fixed it for you. :wink:

Haven’t tried garuda on a production machine, but it’s practically number one on the list. It does nearly everything I usually set up anyway. But that theme is really … scary. :stuck_out_tongue_winking_eye:

On the other hand garuda has a different philosophy than EOS, which is rather unopinionated, and currently ext4 is the sane default for most users.

Does the automatic setup work if you dual boot, i.e. if your /boot/efi is on Windows’ Fat32 boot partition?

Something was frakked; I ran it again and it upped the numbers to an average compression rate of 38%. So, fine :slight_smile:

Well, I have spent the evening looking at this and I think I will redo my partition scheme completely for the first time since Nov 2019.

Meaning I will sacrifice having a backup of media files from /home on an external drive and instead put /home on the SSD, give the entire internal HDD to Windows, and format the entire external HDD for Linux, using it specifically for movies backed up by other means. Then a complete / and /home of 50 GB should be enough.

I’m not so sure - though I would agree that 67.21 TB of lifetime writes looks rather high - this is not a workstation (just a basic i3-4130 HTPC pulling in some TV and running a Plex server). I’ve a fairly old Western Digital closing in on 2328 days, and a couple of Toshibas on 1200 and 1700 days…

In the last month I’ve downloaded no more than 300GB, so we’d be looking at 3TB in ten months at that rate (and that’s a fairly heavy month I’d say)… so I’m interested to get clues on how 67TB is even possible… hence my paranoia about filesystem overhead…

This disk had the basic fat32/ext4 Linux install, so I’m disappointed by the short life despite it running perhaps 16 hours per day… For most of its running time it plays radio, handles torrents, runs Plex (feeding the TV downstairs and the Plex player on the bedroom TV) and does my internet browsing.

I now have about 8TB mounted storage, with maybe 6TB on it - now let’s assume that we downloaded all of that to SSD and then moved it to HDD… 67TB with around 6000 hours power on time would mean average writing of 11GB per hour. The EVO has 74 hours and 200GB (still looking heavy to me, around 3GB per hour).

  • The WD disk was shelved for a while; 2 months ago I dusted it off and threw it back in, formatting to BTRFS for a change, and it seems happy and pretty quiet.

Something which has surprised me is that, if I open Dolphin and right-click to check properties on a Toshiba disk, it has work to do - scanning to get the size and file information (similarly when exploring in Filelight) - whereas if I right-click my ancient Western disk, with BTRFS, the information is just there… and exploring in Filelight brings up its 1.2TB contents in a flash.