Btrfs as a home user - why?

Your use case is rather atypical for home users. I would argue that at your data writing rate, you should consider server-grade storage drives if you are unsatisfied with the lifespan of the drives.

67TB is a pretty good chunk out of a 240GB drive’s TBW. The health shown is on par with this usage.

Well, have you tried Garuda before saying that Calamares cannot set up btrfs?

I actually didn’t want to comment, but anyway:

This type of discussion is what makes me glad that I force all garuda users to use btrfs by default.

Yes, it relies on Timeshift and timeshift-autosnap for the automatic creation of snapshots when the system updates.

You could create btrfs snapshots and rsync backups through Timeshift on the same system simply by using two different Timeshift configurations (the default is /etc/timeshift/timeshift.json).
Set up Timeshift in btrfs mode, then copy and rename the JSON configuration file. Then set up Timeshift in rsync mode and copy and rename that JSON file as well.

You could now have one set up as the default, with the other being called manually or by cron job, for example. Or run both manually; or …
All you need is a simple script that overwrites timeshift.json with the one containing the mode you need, and maybe reverses that afterwards - see the sketch below.
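
A minimal sketch of such a swap script, assuming the two configurations were saved as /etc/timeshift/timeshift-btrfs.json and /etc/timeshift/timeshift-rsync.json (both file names, and the script name, are just examples):

#!/bin/bash
# timeshift-mode: point /etc/timeshift/timeshift.json at the btrfs or rsync configuration
set -e
mode="$1"   # expected: "btrfs" or "rsync"
conf_dir=/etc/timeshift
if [ ! -f "$conf_dir/timeshift-$mode.json" ]; then
    echo "usage: $0 btrfs|rsync" >&2
    exit 1
fi
# keep the currently active configuration so the swap can be reversed
cp "$conf_dir/timeshift.json" "$conf_dir/timeshift.json.bak"
cp "$conf_dir/timeshift-$mode.json" "$conf_dir/timeshift.json"

Something like sudo timeshift-mode rsync followed by sudo timeshift --create would then take an rsync backup without touching the btrfs schedule.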

Installed compsize and took a look. (Right now I am running btrfs on everything but /boot/efi.) Discounting all theoretical benefits, I seem to save about 1 GB on /usr and not much anywhere else, especially not on /home, where 99.5% of my data is images and videos. I think the saving there was literally 100 MB out of 200 GB, because the only things it could compress were my 5 spreadsheets and my CV :wink:

However, if it actually manages to compress roughly 30% on / (not counting the EFI partition that’s on FAT32, and my /home of course), it might be worth continuing to run btrfs on / just for that, and switching to either XFS or ext4 for my home.

Note that one can add or change the compression method(s) used on the installed system later on, but that only applies to newly written files; existing files have to be recompressed.
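
For reference, compression is typically switched on via the mount options, e.g. in /etc/fstab; a minimal sketch, where the UUID and subvolume name are placeholders for your own layout:

# /etc/fstab (placeholder UUID and subvolume)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  noatime,compress=zstd,subvol=@  0  0

After a remount (or reboot) the option takes effect for everything written from then on.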

So unless you had compression activated before filling the drive, you’ll need to recompress.
To recompress /, for example, run
sudo btrfs filesystem defragment -r -v -czstd /

On / I did that after the install; on /home I simply didn’t copy any data back from my backup until compression had been set and the system rebooted. (So /home/myusername was empty except for the pre-made empty folders.)
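
To verify what the compression actually achieves, compsize (mentioned above) can be pointed at a mount point or path and will report the compressed versus uncompressed totals, e.g.:

sudo compsize /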

Strange, considering you mentioned only saving about 1 GB.
My root subvolume, compressed with zstd, is about 13 GB uncompressed with a compression ratio of more than 30%, much like the ones @Schlaefer posted above.

I’ll check again.

enables all garuda users by offering powerful default settings and preinstalled features out of the box

There, fixed it for you. :wink:

Haven’t tried garuda on a production machine, but it’s practically number one on the list. It does nearly everything I usually set up anyway. But that theme is really … scary. :stuck_out_tongue_winking_eye:

On the other hand garuda has a different philosophy than EOS, which is rather unopinionated, and currently ext4 is the sane default for most users.

Does the automatic setup work if you dual boot, i.e. if your /boot/efi is on Windows’ FAT32 boot partition?

Something was frakked; I ran it again and it upped the numbers to an average compression ratio of 38%. So, fine :slight_smile:

Well, I have spent the evening looking at this and I think I will redo my partition scheme completely for the first time since Nov 2019.

Meaning I will sacrifice having a backup of media files from /home on an external drive and instead put /home on the SSD, give the entire internal HDD to Windows, and format the entire external HDD for Linux, specifically for movies backed up by other means. Then a complete / and /home of 50 GB should be enough.

I’m not so sure - though I would agree that 67.21 TB of lifetime writes looks rather high - this is not a workstation (just a basic i3-4130 HTPC pulling in some TV and running a Plex server). I’ve a fairly old Western Digital closing in on 2328 days, and a couple of Toshibas on 1200 and 1700 days…

In the last month I’ve downloaded no more than 300 GB, so we’d be looking at 3 TB in ten months at that rate (and that’s a fairly heavy month, I’d say)… so I’m interested to get clues on how 67 TB is even possible… hence my paranoia about filesystem overhead…

This disk had the basic fat32/ext4 Linux install, so I’m disappointed by the short life despite it running perhaps 16 hours per day… For most of its running time it plays radio, handles torrents, runs Plex (feeding the TV downstairs and the Plex player on the bedroom TV) and does my internet browsing.

I now have about 8 TB of mounted storage, with maybe 6 TB on it. Now let’s assume that we downloaded all of that to the SSD and then moved it to HDD… 67 TB with around 6000 hours of power-on time would mean average writes of 11 GB per hour. The EVO has 74 hours and 200 GB (still looking heavy to me, around 3 GB per hour).
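
If you want to cross-check those lifetime figures yourself, many SATA SSDs expose total writes as a SMART attribute; a quick sketch, with the caveat that the attribute name and its unit (often 512-byte LBAs) vary by vendor:

sudo smartctl -A /dev/sda | grep -i -e total_lbas_written -e host_writes
# on drives reporting Total_LBAs_Written, raw value × 512 bytes ≈ lifetime writes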

  • The WD disk was shelved for a while; 2 months ago I dusted it off and threw it back in, formatting it to btrfs for a change, and it seems happy and pretty quiet.

Something which has surprised me is that, if I open Dolphin and right-click to check properties on a Toshiba disk, it has work to do - scanning to get the size and file information (similarly when exploring in Filelight) - whereas if I right-click my ancient Western disk, with btrfs, the information is just there… and exploring in Filelight brings up its 1.2 TB contents in a flash.

Actually Garuda is ‘close to Arch’ too, so you can do what you want with it later on. One of my Garuda installs is on ext4 (I was having trouble getting it to boot or shut down) and the other is on btrfs. Both of them use ZFS for the data drives - and the btrfs version’s problems were eventually solved by blocking os-prober in GRUB - and it all works beautifully so far.
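
For reference, “blocking os-prober” on an Arch-style system usually amounts to the following (assuming the standard GRUB paths):

# in /etc/default/grub
GRUB_DISABLE_OS_PROBER=true

# then regenerate the configuration
sudo grub-mkconfig -o /boot/grub/grub.cfg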

Everything I’ve managed to take on board about filesystems suggests that btrfs is TRYING to be ZFS, but isn’t quite there yet. Not enough paid devs, I suppose - though that should change with a major distro defaulting to it. The other takeaway I have so far is that f2fs is a good choice to cut down on SSD and NVMe wear, so I have several setups trying that out for non-data use (nowhere close to affording SSDs for data!)

I don’t see any downsides with btrfs for home use - and certainly Calamares had no trouble setting it up in ‘plain’ mode (I didn’t try encryption yet - I need to have more things worth hiding!). It doesn’t magically provide backups for you, though - just snapshots…

I see. I doubt the overhead is due to the file system choice, although I’m not an expert in these matters. However, I’ve never encountered such an unexplained load on any of my drives, all ext4. Also, ext4 is so widespread that if it were the culprit, there would be heavy literature on this by now. I rather suspect swap, or something else going on in the background.

I don’t think 67 TB is out of the ordinary for that use case. Torrents + Plex is going to generate a lot of writes. Plex is write-heavy, especially if it is transcoding for any of those other devices.

I have now installed as stated and, apart from other things throwing wrenches into it (apparently the latest kernel does NOT like my computer, for the first time since I started with Arch, so I have had to switch to the LTS), everything works the lazy way:

No bootable snapshots, just scheduled through Timeshift. It seems 5 snapshots (three boot, one weekly, one monthly) will end up at about 1 GB total, which is fully acceptable.
Anyway, this means I would appreciate some other advice for an easy rsync of dotfiles and documents to an external drive.

Edit: solved it easily by just making a pacman hook that rsyncs my home directory to the external drive on every update, plus making an alias out of the command for manual backups.
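
A minimal sketch of what such a hook could look like, assuming the external drive is mounted at /mnt/backup and the user is the myusername from earlier (both are placeholders):

# /etc/pacman.d/hooks/backup-home.hook
[Trigger]
Operation = Upgrade
Type = Package
Target = *

[Action]
Description = Rsyncing /home/myusername to the external drive...
When = PostTransaction
Exec = /usr/bin/rsync -a --delete /home/myusername/ /mnt/backup/home/

The alias for manual runs would just wrap the same command, e.g. alias backup-home='sudo rsync -a --delete /home/myusername/ /mnt/backup/home/'.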

Want to thank all of you for a really informative post here… where I live in Ecuador, SSDs cost a little over double the price one pays for Amazon delivery to my US address (or, interestingly enough, Colombia)… so I am still using spinning rust on both desktops: 2 ea 1 TB drives in one machine and 3 drives in the second, 2 ea 1 TB and 1 ea 2 TB… one machine running Manjo and Windoze 10 dual boot, the other running Manjo stable and EOS…
From what I am reading here it looks like it may be time to change my fs from ext4 to btrfs. Many thanks for all the great information… now if I could figure out a way to smuggle a few drives across the border from Colombia… oh wait… I didn’t say that…

When I got some 18650 batteries from China to Thailand via AliExpress last year, they came packed inside a torch - because torches can be shipped, but batteries cannot. The torch was free. So my advice is to contact a seller and advise them on packing and labelling :wink: perhaps they could put the drives into a defunct/low-price device - or even a metal pencil case - for shipping.

Good luck :wink:

Thanks Ben… Good suggestion. What I am going to do is cultivate someone across the border as a shipping point… go over, pick them up, and come back. We will see…
