Convert root to f2fs from xfs

I don’t see a more appropriate category than this one - so I offer my apologies up front.

My root partition is XFS on an NVMe drive. I would like to convert it to f2fs - without having to format/reinstall (if possible). fstransform does not support f2fs, according to its documentation on GitHub.

I have good rsync backups of my /home dir, and I have just imaged my entire system using Clonezilla to a separate drive. So I’m prepared for pretty much anything to go wrong with no risk of loss of data.

a) Is it possible?

or

b) Do I need to reinstall EOS, then restore my data?

Thanks!

Dave

Nope.

4 Likes

Thank you for the reply… That is pretty much what I gathered, since fstransform doesn’t support it.

I’ll re-install and restore… I’ve just synced 4 backups to different drives - so I’m not going to have data loss.

THANK YOU, @Jonathan, for the response.

Dave

3 Likes

Just wondering - what prompted the departure? Or the destination? I was thinking of trying out ZFS - just to see…

What is the reason for going from XFS to f2fs? Just curious.

Longevity of the internal NVMe drive I paid good money for! :slight_smile:

I have a couple of other NVMe drives installed, and two 2.5" SSDs - and I’m getting phenomenal performance out of them under f2fs. XFS is also performing very well on my root drive… I have no complaints at all about its performance.

I’m just trying to conserve write-cycles on the drive. When I bought the thing - it was expensive.

And as I have quaternary backups of my root drive across several rotating drives - I’m quite safe from data loss in case of a reinstall.

I’m a retired BAS engineer - which involves both hardware and software. When I got into building automation, we wrote our code in the old yet venerable “Macro Assembler”… assembly language… on the Motorola 68000 and the 6809E of Tandy Color Computer fame. In total, I have 18 TB of storage online. (I still service two long-term clients in my retirement for both coding and IT support. Coding in C/C++ - and Python is a recent addition to my skill set.)

So I tend to build my workstation for myself and others - to the best longevity specs I can…

F2fs on root will be a first for me… It already works well on the other SSD drives I have online.
And there is no risk to my data if it doesn’t work out…

Sincerely and respectfully,

Dave

Wasn’t f2fs meant for USB drives and phone-type devices? If XFS works well, I wouldn’t change it. With f2fs you do need to check fstab for the fsck (pass) field - I think it should be set to 0 0. I’d also prefer a filesystem with solid upstream support.

Thank you for the reply @ringo

My fstab follows the default settings mentioned on the Arch wiki for XFS on root.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a device; this may
# be used with UUID= as a more robust way to name devices that works even if
# disks are added and removed. See fstab(5).
#
# <file system>                                 <mount point>                       <type>  <options>                    <dump>  <pass>
UUID=96909132-5bde-4c03-af9a-33057afdaec0       /                                   xfs     defaults,noatime             0       1
tmpfs                                           /tmp                                tmpfs   defaults,noatime,mode=1777   0       0

# Daves Mounts
UUID=affaeb88-aeb9-41ad-ba85-a6a7503a0b9a       /AdditionalDrives/Backup-1          xfs     defaults,noatime,exec        0       2
UUID=f36a2fb2-b4fb-4876-a3d1-d85f02a867a8       /AdditionalDrives/Backup-2          xfs     defaults,noatime,exec        0       2
UUID=217bf509-bc3a-42f1-a0dd-75e6b6fd1ca7       /AdditionalDrives/Backup-3          xfs     defaults,noatime,exec        0       2
UUID=4db7a6a5-4794-4485-be69-7aefc58ceff8       /AdditionalDrives/Storage-1Tb       f2fs    defaults,noatime,exec        0       2
UUID=8e445f3b-6e89-44dd-8dc6-5d1e4ed8aa0f       /AdditionalDrives/VirtualMachines   f2fs    defaults,noatime,exec        0       2
UUID=7120a867-5fe8-4e1b-b324-1af2d1ec5148       /AdditionalDrives/Storage-512G      f2fs    defaults,noatime,exec        0       2
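For reference, if the conversion goes ahead, the root entry would presumably change along these lines. This is a hypothetical line, not something applied yet - the UUID is simply reused from the fstab above, and whether the fsck pass field should stay at 1 or be set to 0 for f2fs is exactly the open question from earlier in the thread:

```
# Hypothetical post-conversion root line (pass field debatable - 1 shown, some set 0 for f2fs):
UUID=96909132-5bde-4c03-af9a-33057afdaec0       /       f2fs    defaults,noatime        0       1
```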

Thank you again!

Dave

Wasn’t there something on the fstab wiki page about the fsck (pass) field for XFS on root…

exec is redundant if defaults is selected, since defaults already includes exec.

Can you elaborate on this a little bit? Do you have links you can share?
I understand that XFS eventually incurs more write cycles because it uses a journal. But isn’t that welcome functionality? What else causes f2fs drives to live longer?

One of the beauties of Linux - is that there is always something new to learn.

THANK YOU @Leon, for clarifying the “defaults” vs “exec” options. I looked at the wiki article (and a few more) and validated what you posted. I have amended my fstab accordingly, and propagated that file throughout my backups.

@mbod… This link should help explain more about f2fs. It was designed from the ground up by Samsung specifically for NAND-based storage devices.
https://en.wikipedia.org/wiki/F2FS

And the f2fs fsck bug has been squashed with this commit:
https://www.mail-archive.com/linux-f2fs-devel@lists.sourceforge.net/msg17224.html

Thank you all for your responses!

Dave

Just let us know if it all works :slight_smile: This is wild-west territory for me too :slight_smile:

I’ve already tested it in virtual machines… so I know that f2fs on / works.

But on real hardware…well now…that is going to be tested on a laptop first, then on my main workstation.

Dave

Edit and PS: The laptop has a 512 GB SSD in it…

Good Evening All,

EOS installed, booted, and updated without issue under F2FS.

Dave

2 Likes

Thanks for the info - I wasn’t really aware of f2fs’s existence, but it sounds like something I should be looking at, given the NVMe drives I run these days…

I am well aware of the wiki article and many others talking about f2fs. But none of them elaborates on the f2fs effect on device lifetime.

The only thing I could find with a little more details is the first article from the developers published in 2015: http://www.cs.fsu.edu/~awang/courses/cop5611_s2020/f2fs2.pdf

In that article they specifically focus on mobile devices, and they claim that Android/iOS client apps from Facebook or Twitter, for example, have random write patterns and that…

Unless handled carefully, frequent random writes and flush operations in modern workloads can seriously increase a flash device’s I/O latency and reduce the device lifetime.

Furthermore

As far as we know, F2FS is the first publicly and widely available file system that is designed from scratch to optimize performance and lifetime of flash devices with a generic block interface. This paper describes its design and implementation.

There is no specific test regarding lifetime of the device. In fact, when they also test PCIe SSDs with f2fs, ext4 and btrfs they say:

On the PCIe SSD, all file systems perform rather similarly. This is because the PCIe SSD used in the study performs concurrent buffered writes well.

Which basically means that today’s SSD/NVMe controllers mitigate the issue that they found with mobile devices’ SD cards.

To me this all looks like the internet is repeating the initial lifetime claim from that article without mentioning that it is only valid for mobile devices with dumb SD cards, and not for PCIe devices with smart controllers.

1 Like

Hello @mbod,

This article may shed some light on the fact that f2fs is designed for smart devices too.
https://wiki.archlinux.org/index.php/F2FS

Key quote from this article:

F2FS (Flash-Friendly File System) is a file system intended for NAND-based flash memory equipped with Flash Translation Layer. Unlike JFFS or UBIFS it relies on FTL to handle write distribution. It is supported from kernel 3.8 onwards.

An FTL is found in all flash memory with a SCSI/SATA/PCIe/NVMe interface, opposed to bare NAND Flash and SmartMediaCards.

<Emphasis - mine>

So, respectfully, the argument that f2fs no longer applies due to advances in the electronics doesn’t appear to be correct.

This article explains the purpose of FTL - and as you brought up, the FTL handles wear leveling.
https://itigic.com/ftl-why-is-it-so-important-in-ssds/

And in tests conducted by people smarter than me, it performs well. Like all filesystems, it has its strengths and weaknesses.
https://www.phoronix.com/scan.php?page=article&item=linux-50-filesystems&num=1

Then you have this Reddit thread, which deals specifically with the misconception that f2fs is meant for “dumb” SSD devices only:
https://www.reddit.com/r/linux/comments/6bngw0/refuting_the_myth_about_f2fs/

And here is the source information cited in that Reddit thread:
https://lwn.net/Articles/518988/

Bottom line - f2fs is a newer FS. It has not had the benefit of the test of time, unlike ext4/3/2, XFS, etc.

As with all things new: while f2fs is considered mature and btrfs is not, some distros are now going with btrfs as the default FS. NOT ME. I do not trust btrfs - in most tests it’s slow, and in many posts I’ve read it is still causing catastrophic data loss in some use cases… whereas f2fs is not causing data loss, according to those who use it.

As with all things Linux - it is the user’s choice (and their risk to assume). I’ve tested three other smart SSDs/NVMe drives I have in my system with f2fs for the last few months - including one that hosts my KVM machines… And in my usage scenario, f2fs performs flawlessly and very quickly. So the next logical step (for me) would be to convert my root FS (also an NVMe) to f2fs.

Thank you for the reply - but I will be converting my root fs to f2fs. I will report back if it’s a disaster…but I’ve done f2fs on root with EOS and ArcoLinuxB on my laptop. Both installs went perfectly, the updates went perfectly, and the laptop (my service laptop) enjoyed a speed-up in the process as an additional “perk”.

Sincerely and respectfully,

Dave

2 Likes

My root FS is now f2fs. I did an rsync backup of my home dir, and an rsync restore to it from my backups.

I exported my list of installed pacman packages to a file, then used that file as input to pacman after the format and reinstall, to ensure I had everything I had installed previously.
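That export/reimport step looks roughly like this (the file name pkglist.txt is my own placeholder, not from the thread; the dummy list at the end just makes the sketch runnable without touching a real system):

```shell
# Save the explicitly installed packages before the reinstall:
#   pacman -Qqe > pkglist.txt
# After the reinstall, feed the list back in; --needed skips anything
# the installer already pulled in:
#   sudo pacman -S --needed - < pkglist.txt

# Simulated round trip with a dummy list, so this sketch runs anywhere:
printf '%s\n' base linux-firmware rsync > pkglist.txt
while read -r pkg; do echo "would install: $pkg"; done < pkglist.txt
```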

So the system is fully “loaded up”.

F2FS seems to be a tad quicker than XFS on the exact same drive…But that could simply be the placebo effect.

FYI,

Dave

2 Likes

It makes perfect sense - and something on my to-do list. However, I should probably reduce the number of distros I would do it to first! Perhaps it would also make sense to go to XFS for my spinner (data drive for all distros) but the logistics of that are intimidating… perhaps rsync over ssh? Temporarily adding another drive? Crossing my fingers? :grin: I probably won’t get around to that any time soon - but it would make sense…