Considering some btrfs optimizations

Greetings lovely community,

I’ve only ever known ext4, but I switched to btrfs earlier this year. I’ve seen different distros/users optimize btrfs in slightly different ways. When I installed with Calamares from the Apollo 22-1 iso, I just used the default btrfs settings with an 8GB (no hibernate) swap partition on my 256GB SSD. I’ve been wondering, though, whether I should add any optimizations to my btrfs setup, or whether the defaults that EndeavourOS used from the latest iso are enough for the average user.

sudo nano /etc/fstab
# /etc/fstab: static file system information.
# Use 'blkid' to print the universally unique identifier for a device; this may
# be used with UUID= as a more robust way to name devices that works even if
# disks are added and removed. See fstab(5).
# <file system>             <mount point>  <type>  <options>  <dump>  <pass>
UUID=EC97-73E1                            /boot/efi      vfat    defaults,noatime 0 2
UUID=112e02fc-dd9b-419a-a575-bc79354c86ab /              btrfs   subvol=/@,defaults,noatime,compress=zstd 0 0
UUID=112e02fc-dd9b-419a-a575-bc79354c86ab /home          btrfs   subvol=/@home,defaults,noatime,compress=zstd 0 0
UUID=112e02fc-dd9b-419a-a575-bc79354c86ab /var/cache     btrfs   subvol=/@cache,defaults,noatime,compress=zstd 0 0
UUID=112e02fc-dd9b-419a-a575-bc79354c86ab /var/log       btrfs   subvol=/@log,defaults,noatime,compress=zstd 0 0
UUID=b479a7af-fa4d-4bb8-98df-2ab4ae7a9b78 swap           swap    defaults   0 0
tmpfs                                     /tmp           tmpfs   defaults,noatime,mode=1777 0 0

ssd, noatime, space_cache, commit=120, compress=zstd, discard=async, nodatacow, and autodefrag are the optimizations that I know of. As you can see from my fstab above, I’m already using defaults, noatime, and compress=zstd for the btrfs volumes. Calamares also created swap, vfat, and tmpfs entries, but I don’t know whether I should add optimizations to those as well or whether these options are meant for btrfs only.

I’ve got an SSD and have already enabled the TRIM timer via systemctl enable fstrim.timer, so I’m wondering whether I should add the ssd mount option to my btrfs as well, or whether that’s overkill. I’m not sure, so feel free to clarify anything for me.

I understand a lot of this may be subjective and depend on the user; I’m just looking for practical options to consider, keeping things lean, smooth, and fast. If something is a good idea or good practice, I’d like to enable it. If something is not really needed, won’t make a noticeable difference, or even decreases performance, then I can take it or leave it; that’s my philosophy.

At the very least I’m probably going to want to add ssd and space_cache, but I’m wondering how many of these optimizations I really need, so I’d appreciate any thoughtful guidance on the matter. Thanks for taking the time to read and reply; responses are always very much appreciated.


They are mount options more than optimizations. Different filesystems use different options.

Btrfs autodetects ssds and automatically sets that option when it detects an ssd.

On modern kernels, space_cache=v2 is the default. There is no reason to add it. Since space_cache is applied to the whole filesystem, specifying it can cause unexpected behaviour in some cases.
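If you want to verify this, you can ask the kernel which options are actually in effect; on a btrfs root on an SSD you should see ssd and space_cache=v2 in the output even though your fstab doesn’t list them:

```shell
# Show the mount options the kernel actually applied to the root filesystem.
# btrfs adds options like ssd and space_cache=v2 automatically at mount time,
# so this list is usually longer than the fstab line.
findmnt -no OPTIONS /
```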


So it looks like what I’m hearing is that I’m already pretty optimized with the EndeavourOS defaults then :sunglasses:


Is there a way to modify the zstd compression level after setting it during the install?
I think that the default level is 3. What level do you recommend?
What about the commit=120 option?

You can change the compression level for newly written data by changing the mount options in fstab. For existing data you can run btrfs filesystem defragment, but be aware that it will literally decompress and recompress the data, so it will be time-consuming if you have a lot of data.
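For example, to recompress everything under /home after changing the fstab entry (the zstd:1 level here is just an illustration; use whatever level you actually set), something like this should work:

```shell
# Rewrite existing files under /home, recompressing them with zstd level 1.
# Needs root and an actual btrfs mount; expect heavy I/O while it runs.
sudo btrfs filesystem defragment -r -v -czstd:1 /home
```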

It depends on the speed of your CPU and your storage. The faster the CPU, the more compression you can support. On modern CPUs, I use the default zstd. When CPU cycles are at a premium, I use zstd:1 or lzo.

That is the interval at which data is written to the disk. Personally, I would not be comfortable setting it as high as 120. That is 2 minutes. I usually leave it at the default of 30 seconds. Of course, it is up to your use case what is appropriate for you.
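Putting those two together, a tuned root entry might look like this (reusing the UUID from the fstab above; zstd:1 is just an example level, and leaving commit out keeps the 30-second default):

```
UUID=112e02fc-dd9b-419a-a575-bc79354c86ab /  btrfs  subvol=/@,defaults,noatime,compress=zstd:1 0 0
```

After editing fstab, sudo mount -o remount / applies the new options without a reboot; only data written after the remount gets the new compression level.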


I don’t have

tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0

in my fstab on an Arch (the-arch-way) install.

However, I see:

/tmp tmpfs tmpfs rw,nosuid,nodev,nr_inodes=1048576,inode64


tmpfs 3.8G 16K 3.8G 1% /tmp

in the output of findmnt and df -h, respectively.

Should I be adding that line explicitly to fstab?


Also, I have discard=async among the mount options for my subvolumes. Do I need to remove that if I run fstrim manually or with the service?

Note: There is no need to enable continuous TRIM if you run fstrim periodically. If you want to use TRIM, use either periodic TRIM or continuous TRIM.

Note: Continuous TRIM is not the most preferred way to issue TRIM commands among the Linux community. For example, Ubuntu enables periodic TRIM by default [7], Debian does not recommend using continuous TRIM and Red Hat recommends using periodic TRIM over using continuous TRIM if feasible [8].


You don’t need to add it explicitly.

You don’t have to remove it but you probably can. discard=async is a much more efficient form of discard.

My understanding is that if your workload is write-intensive or your disk is close to full, then discard=async is a better choice. Otherwise, periodic trim should be sufficient.


Thanks @EOS!
Sorry for not having done my homework better!
It seems that there are always things that escape me in the ArchWiki :blush: :sweat_smile:

Good to know!

Thanks for the explanation!
No, my workload is not what could be qualified as write-intensive, so I think I will remove that option and go with the following advice:

Thank you both, @EOS and @dalto!


This prompts me to ask: is my fstab set up well for Btrfs?


To me it looks to be pretty well stock.


I am not a huge fan of autodefrag. Personally, I would remove that.

If your filesystem needs to have defrag run on it, just run a periodic defrag manually, using a timer or by using Btrfs Maintenance.
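As a sketch of the timer approach (the unit names and the monthly schedule here are my own choices, not something Btrfs Maintenance ships):

```
# /etc/systemd/system/btrfs-defrag.service
[Unit]
Description=Defragment the btrfs root subvolume

[Service]
Type=oneshot
ExecStart=/usr/bin/btrfs filesystem defragment -r /

# /etc/systemd/system/btrfs-defrag.timer
[Unit]
Description=Run btrfs defrag monthly

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target
```

Then enable it with sudo systemctl enable --now btrfs-defrag.timer.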


Would something like the following,

for defragment:

sudo btrfs filesystem defragment -rv /

and for fstrim:

sudo fstrim -av

run on all the volumes mounted on the system? (-v only because I like to see what is going on :nerd_face:)

That will defragment the subvolume mounted at /. If you want to defragment all of them, you need to use Btrfs Maintenance or use a separate command for each subvolume you want to defragment.
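With the subvolume layout from the fstab earlier in the thread, the separate-command approach could look like this (mount points are taken from that fstab; adjust to yours):

```shell
# Defragment each mounted btrfs subvolume in turn.
# Requires root; add -czstd to also recompress existing files.
for mp in / /home /var/cache /var/log; do
    sudo btrfs filesystem defragment -r -v "$mp"
done
```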

Why not use the fstrim timer/service?

Alright, I get it. I’ll be also looking at Btrfs Maintenance.

Good question!

I tend to do manually most of the things that could easily be automated to make life easier :sweat_smile:

I guess seeing all those lines flying by makes me feel more assured. It’s just a thing :sweat_smile:

I know that Btrfs is slightly different when it comes to defrag.
The only difference I found is that, according to Dalto, it can change the compression of existing files if you changed the setting in fstab.

So, as I’m using an SSD and I didn’t change my compression settings, why should I run a defrag?

Is this a correct understanding Dalto?
Or is there another benefit from running it that I’m currently not aware of?


The filesystem will become fragmented over time, decreasing performance. On an HDD the decrease in performance can be significant over time. This is a bigger issue with btrfs than with traditional filesystems because fragmentation is a side effect of btrfs’s CoW implementation.

On SSDs, the same thing can happen although the performance degradation is much less because of the nature of an SSD.



You also need to start the timer:
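Since systemctl enable on its own only takes effect at the next boot, something like this starts it immediately as well:

```shell
sudo systemctl enable --now fstrim.timer
```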


Btrfs defrag brings some potential drawbacks that you should be aware of:

  1. Defragmenting files moves Btrfs data extents and attempts to align them one after the other, so the Copy-on-Write links between copies of a file break. This increases redundant data extents and the disk usage of a Btrfs filesystem that previously saved space by sharing data extents between identical (or nearly identical) copies of the file.
  2. If a Btrfs subvolume has multiple snapshots, defragmenting the subvolume will break the Copy-on-Write links between the subvolume and the snapshots. This will increase disk usage of a Btrfs filesystem.
  3. If you are using the Btrfs filesystem for large databases or virtual machine images (for storing VM data/disks), defragmenting the filesystem will also negatively impact the performance of the filesystem.

So, it is necessary to run it once in a while, but it can cause problems? Kinda controversial statements, don’t you think?

Btrfs Maintenance will enable that for you. It might not be started until you reboot, though; without looking, I am not sure whether it starts it immediately.