"PSA: Linux 5.16 has major regression in btrfs causing extreme IO load"

Found something interesting: if you run # systemctl start btrfs-defrag.service followed by # systemctl status btrfs-defrag.service, you can see what command is being issued…

$ sudo systemctl status btrfs-defrag.service (while the service is running)
● btrfs-defrag.service - Defragment file data on a mounted filesystem
     Loaded: loaded (/usr/lib/systemd/system/btrfs-defrag.service; static)
     Active: active (running) since Thu 2022-02-03 16:26:15 -03; 15s ago
TriggeredBy: ● btrfs-defrag.timer
       Docs: man:btrfs-filesystem
   Main PID: 4118 (btrfs-defrag.sh)
      Tasks: 6 (limit: 38393)
     Memory: 3.1G
        CPU: 3.235s
     CGroup: /system.slice/btrfs-defrag.service
             ├─4118 /bin/bash /usr/share/btrfsmaintenance/btrfs-defrag.sh
             ├─4121 /bin/bash /usr/share/btrfsmaintenance/btrfs-defrag.sh
             ├─4122 /bin/bash /usr/share/btrfsmaintenance/btrfs-defrag.sh
             ├─4124 cat
             ├─5007 find /var/cache -xdev -size +1 "" -type f -exec btrfs filesystem defrag -t 32m -f {} ";"
             └─5237 btrfs filesystem defrag -t 32m -f /var/cache/pacman/pkg/linux-5.16.5.arch1-1-x86_64.pkg.tar.zst

fev 03 16:26:15 eos systemd[1]: Started Defragment file data on a mounted filesystem.
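
If you want to stop the scheduled defragmentation from running while this regression is present, one option (a sketch; btrfs-defrag.timer is the unit shown above under TriggeredBy, adjust if your distro names it differently) is to stop the service and disable its timer:

$ sudo systemctl stop btrfs-defrag.service
$ sudo systemctl disable --now btrfs-defrag.timer

Re-enable it later with sudo systemctl enable --now btrfs-defrag.timer once the regression is fixed.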

Anyone performing manual defragmentation should be aware of one very important consideration:

Defragmenting breaks the reflinks between a file and any snapshots that share its extents. This means that defragmenting snapshotted files and applying compression at the same time may end up creating multiple copies of the file's data on disk, thereby increasing disk usage (with more space used the more snapshots there are).
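
If you want to see the effect for yourself, here is a minimal sketch (the paths, names, and sizes are made-up examples; it assumes /mnt is a scratch btrfs mount you can play with):

$ sudo btrfs subvolume create /mnt/@demo
$ sudo dd if=/dev/urandom of=/mnt/@demo/big.bin bs=1M count=256
$ sudo btrfs subvolume snapshot -r /mnt/@demo /mnt/@demo-snap
$ sudo btrfs filesystem du -s /mnt/@demo /mnt/@demo-snap
$ sudo btrfs filesystem defragment -czstd /mnt/@demo/big.bin
$ sudo btrfs filesystem du -s /mnt/@demo /mnt/@demo-snap

Before the defragment, btrfs filesystem du reports the extents as shared between the subvolume and the snapshot; afterwards the reflinks are broken, the exclusive figures grow, and the same data is stored twice.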

But isn’t that caused by using the -c flag, which changes the compression? If so, I wouldn’t expect changing the compression to be part of a routine defrag.

Yes indeedy. It also seems to ignore the compression level set on the mountpoint, which is annoying. I’ll edit my post to make it clearer.

I’ve got compress=zstd in /etc/fstab.
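
For reference, the relevant line looks something like this (the UUID and the other options are placeholders for illustration, not the actual entry):

# /etc/fstab (illustrative entry only)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  rw,noatime,compress=zstd,subvol=@  0 0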

So when running a manual defragment and using -clzo, for example, is that when this duplication of files occurs?

Yes. But…why would you do that?

I wouldn’t. I was just using an example to understand what @jonathon posted above.

So when running a manual defragment, the -c flag isn’t necessary at all if I’ve got compress=zstd in fstab?

The only time I have ever personally used the -c flag is when I wanted either to compress uncompressed files or to change the compression on files. This should be a pretty rare occurrence.
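
For example, to compress previously uncompressed files, or to re-compress them with a different algorithm, as a one-off (the path is just an illustration):

$ sudo btrfs filesystem defragment -r -czstd /path/to/dir

If you have the compsize tool installed, sudo compsize /path/to/dir will show the resulting compression ratio.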

Alright. Got it. Thanks!

How do I know if I have a problem on btrfs?

Read the thread… :stuck_out_tongue_winking_eye:

This “constant writing bug” is easy to spot, at least in plasma:

[screenshot: disk activity applet, two panels]

Left: something is happening on the hard drive.
Right: maybe this bug is not a problem for this disk.

There are certainly other similar programs besides that applet and iotop.
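
From a terminal, for example (assuming iotop is installed; -o shows only processes currently doing I/O, and -a accumulates totals instead of showing per-sample bandwidth):

$ sudo iotop -o -a

With the sysstat package installed, pidstat -d 1 gives a similar per-process view of disk I/O once per second.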

As far as I know, defragmentation is not recommended on SSDs, right? It can reduce their lifespan. Is it enabled on ext4 by default?

Yes. It just does not make sense to optimize the block layout of files on a non-spinning drive: all blocks on an SSD/NVMe drive have the same read/write latency, so there is no performance gain from defragmentation.

No. There is no defrag/autodefrag mount option for ext4 filesystems. If you want to defrag an ext4 filesystem, you need to do it manually; e4defrag is the tool for it.
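
For example (the path is just an illustration; with -c, e4defrag only reports the current fragmentation without changing anything):

$ sudo e4defrag -c /home
$ sudo e4defrag /home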

SSDs and NVMes have wear leveling in the firmware, which determines the placement of data in the cells, so defragmenting means nothing at that level. However, the btrfs filesystem maintains data structures that map the free and used space on the filesystem, and those map-like data structures can get very large if the data they map becomes too fragmented. So, on btrfs filesystems, it is beneficial to defragment occasionally. The point of the defragment process is not really to help with the actual data on the disk at the cell level, but to keep the btrfs map data structures from becoming too large. In actuality, the data is always going to be fragmented across the cells of an SSD, but the filesystem “thinks” the data is more contiguous. :slight_smile:
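
For the curious, a minimal sketch of checking and fixing this by hand (the paths are made-up examples): filefrag prints the number of extents a file is split into, and a targeted defragment with a 32M target extent size, like the btrfs-defrag.sh script above uses, merges them without touching compression:

$ sudo filefrag /var/log/journal/*/system.journal
$ sudo btrfs filesystem defragment -r -t 32M /var/log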

That is interesting, and from my point of view it is another design flaw of btrfs.

This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.