Is btrfs good for my SSD?

I had heard a while back that btrfs is bad for SSDs. I think that issue has been dealt with, but just to make sure: is btrfs good for my SSD? Also, any tips to increase longevity?
I did take a look at "Any tips for btrfs on an SSD",
but I'm not versed in filesystems, so I couldn't get much out of that.
I would really appreciate some help.

Btrfs is the default filesystem for Fedora and openSUSE (other distros may follow suit in the future), so the fact that two top-tier distros, used heavily in enterprise systems, default to it should be enough to give you confidence in btrfs. With that said, ext4 is tried, tested, and stable; it just works without you ever really having to do anything to your filesystem.

Btrfs, on the other hand, requires you to read about it in order to use all its advanced features, and may need a little configuration to your liking, but the defaults should be enough for the average user. Strictly in terms of being healthy for an SSD, btrfs is fine, and it is constantly being improved. With that said, btrfs is still not 100% perfect, so as with anything, always keep a backup of your critical files handy. I've been using btrfs for less than a year with no issues, and many users have been using it happily for longer.
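If you do go with btrfs on an SSD, most of that light configuration happens through mount options. A minimal sketch of an /etc/fstab entry (the UUID is a placeholder; `compress=zstd:1` and `discard=async` are common SSD-friendly choices, not official recommendations):

```shell
# /etc/fstab — hypothetical btrfs root on an SSD
# noatime         : skip access-time updates (fewer writes = less wear)
# compress=zstd:1 : light transparent compression, which also reduces writes
# discard=async   : batched TRIM so the SSD can reclaim freed blocks
# space_cache=v2  : modern free-space tree (the default on recent kernels)
UUID=xxxx-example-uuid  /  btrfs  noatime,compress=zstd:1,discard=async,space_cache=v2  0 0
```

Alternatively, leave `discard` off and enable the periodic `fstrim.timer` systemd unit instead; both approaches are widely used.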


A good balanced take!

So basically, if you want to set it and forget it, choose ext4. If you want the latest and greatest and don't mind getting your hands dirty, choose btrfs.

I used to use btrfs, but restoring GRUB proved to be really hard, probably because my system only supports MBR; I had no such problems with ext4. Also, when using btrfs you should be aware of stuff like this.

I am more of a set-it-and-forget-it guy, so I use ext4, at least for now.

For general use there isn't really any reason to be concerned about btrfs on an SSD; it's not going to hurt your SSD, AFAIK. You might run into performance issues depending on the configuration and what you're doing, but that's not harmful to the SSD.

Ext4 is generally more reliable and produces fewer writes (and writes are what wear an SSD out).
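On the wear point: you can check how much has actually been written to an NVMe drive with `smartctl -a /dev/nvme0` and its "Data Units Written" field, which counts units of 512,000 bytes. A small sketch of the conversion (the 4,000,000 figure is just an example value):

```shell
#!/bin/sh
# Convert NVMe SMART "Data Units Written" to terabytes.
# One data unit = 512,000 bytes (1000 * 512), per the NVMe spec.
data_units_to_tb() {
  awk -v u="$1" 'BEGIN { printf "%.2f\n", u * 512000 / 1e12 }'
}

data_units_to_tb 4000000   # prints 2.05 (i.e. ~2 TB written)
```

Compare that against the drive's rated TBW endurance to see how much life it has left.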

The most hardcore disk writer is usually your browser, so you may want to tweak its cache to be stored mainly in RAM instead of on disk… Which browser is in use?
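For Firefox specifically, the disk cache can be turned off in favour of a RAM cache via about:config or a `user.js` file in the profile directory. A sketch (the pref names are real Firefox preferences; the capacity value is an arbitrary example):

```shell
# Append RAM-cache prefs to a Firefox user.js. It normally lives in
# ~/.mozilla/firefox/<profile>/ — written to the current directory here,
# purely as a sketch.
cat >> user.js <<'EOF'
user_pref("browser.cache.disk.enable", false);      // no cache writes to disk
user_pref("browser.cache.memory.enable", true);     // keep the cache in RAM
user_pref("browser.cache.memory.capacity", 524288); // KB; -1 means auto-size
EOF
```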

Depending on what you want btrfs for, couldn't you just run it without CoW on the SSDs you're worried about? I know btrfs has a LOT of options for tuning, and I'm no expert on it.

Not sure you could, actually; CoW is kind of the whole point of btrfs…
But even if you could, why?

In my mind, there's no reason to use btrfs unless you need some of its very specific features.
Or seek trouble :laughing:
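For the record, you actually can opt out of copy-on-write on btrfs, either per directory/file or for a whole mount, though doing so also disables checksumming and compression for that data. A sketch (`/mnt/data` is an assumed btrfs mount point):

```shell
# Disable CoW for new files in a directory (typical for VM images, databases):
mkdir -p /mnt/data/vm-images
chattr +C /mnt/data/vm-images   # the 'C' attribute = no copy-on-write
lsattr -d /mnt/data/vm-images   # verify: the flag shows up as 'C'

# Or disable CoW filesystem-wide at mount time:
# mount -o nodatacow /dev/sdb1 /mnt/data
```

Note that `chattr +C` only affects files created after the flag is set.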


Ext4 didn't catch my defective RAM for 9 months after purchase, but btrfs did, thanks to its checksumming ability. This is my real experience.

If I had continued to use ext4 with the damaged RAM, I would have ended up with lots of corrupted data without ever noticing.

That's the point: with ext4 you simply cannot notice it, and then your system can no longer be 100% trusted once memory hardware is damaged, e.g. the CPU cache, wrong binary calculations in the CPU, the RAM, or the disk.
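Those checksum checks happen on every read, and you can also trigger a full verification pass on demand with a scrub. A sketch (assumes `/` is a btrfs filesystem and you have root):

```shell
btrfs scrub start /    # read all data and metadata, verifying checksums
btrfs scrub status /   # progress, plus a count of checksum errors found
# Individual mismatches are also logged by the kernel:
dmesg | grep -i 'csum'
```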

That's nice, I guess, but personally I'd argue you should check your RAM with memtest and a stress test; that's not the job of the filesystem.

However, during a recent power outage I had 4 ext4 HDDs and a single experimental btrfs drive. Btrfs completely and utterly failed and became unrecoverable; its analogue of fsck didn't work because it's "not mature yet" or something.

I thought my drive had been killed, but it absolutely wasn't: SMART was OK and all its sectors were good… I wiped it, reformatted with ext4, recreated all the files (fortunately nothing was unrecoverable), and was good to go!

Now it is completely fine, along with the rest of the ext4 drives.

So in my book ext4 is much more reliable… It's anecdotal experience vs. anecdotal experience, but my case is far harder to excuse, because the only common denominator of the problem was btrfs itself, whereas you had a RAM problem, not a drive or HDD problem.

I did, but MemTest86+ did not detect the error; that is why I sent the defective RAM to PassMark, the company that develops MemTest86+, so they can improve it, though it is not easy. They confirmed the RAM is damaged.

I have also experienced 3 power outages, but btrfs ran fine after them, so maybe I'm lucky.

You did good by helping PassMark improve :+1:

I wonder, have you tried a serious stress test like Linpack running for 6+ hours?
Usually in such cases that's the way to catch obscure problems like this. After some weird RAM problems I had about 10 years ago, I always heavily memtest/stress-test my new systems so I don't hit that kind of unexpected stuff later.
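Alongside MemTest86+ and Linpack, in-OS options include `memtester`, which locks a chunk of RAM and hammers it with test patterns, and `stress-ng`'s VM stressors. A sketch (the sizes, pass counts, and duration are arbitrary examples; root is needed to lock memory):

```shell
# Lock 4 GiB of RAM and run 3 full passes of pattern tests:
sudo memtester 4G 3

# Or a longer soak using stress-ng's memory stressors:
sudo stress-ng --vm 2 --vm-bytes 75% --vm-method all -t 6h
```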

Yeah, I have run MemTest86+ twice (2x about 7 hours) and mprime, because I am using 4 RAM sticks. Their results showed no errors, but they are wrong.

My long story here:

Just for the future, keep in mind that mprime is not the best tool to detect RAM-related errors; it's better for heat/CPU issues. Next time try Linpack for that; it should detect more of them.

Firefox, Brave, and Tor.

I think I'm gonna use ext4.


Here is some good back-and-forth on btrfs and ext4.

My reason for using btrfs was instant snapshots and simple restore from the GRUB menu.
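Those snapshots are plain subvolume operations under the hood. A sketch (assumes a Snapper/openSUSE-style layout with a `/.snapshots` directory; the snapshot name is hypothetical):

```shell
# Take an instant, read-only snapshot of the root subvolume:
btrfs subvolume snapshot -r / /.snapshots/pre-update

# List existing subvolumes and snapshots:
btrfs subvolume list /

# Restoring from the GRUB menu is typically wired up by tools such as
# Snapper plus grub-btrfs, rather than by btrfs itself.
```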

mprime's large FFT test is actually pretty good at finding memory errors; small FFT is for heat.


Those are the 3 best memory testers. Memtest/MemTest86 hasn't been able to decently detect memory errors in nearly a decade unless the memory is so faulty you can't miss it. 9 times out of 10 I've had faulty memory pass Memtest 100%, then fail under other loads or in one of the 3 listed. Those 3 also find when your memory clocks are too high for your IMC; that's how I found 2933 MHz to be the max my 5600X can do with four DIMMs.

True, though I find Linpack a bit better; anyway, running both won't hurt :+1:

Yeah, it never hurts; you can always add it to the round of "is my system broken" tools lol

How did you run mprime? Just the default mixed load? Small or large FFT? Also, you should just dump MemTest86+, as these days it can't handle modern IMCs/DIMMs and almost never detects errors.

Edit: this may change with the recent v6.0 betas, btw. It has been rewritten with UEFI and 64-bit support, which it didn't have previously, the last stable release being in 2013. Before that it was a 32-bit application with hacky workarounds to enter long mode for testing beyond 4 GB, and other weird tricks to sort of "work".