Adventures with btrfs and GPT

Disclaimer: quite a long post, sorry for the verbosity.

In my transition to EOS from another Arch-based distribution, I am also attempting a switch to a GPT+btrfs setup.

Before approaching the actual installation process, I have examined several sources - starting from the very informative threads in this same forum (list for future reference):

My layout plan (w/encrypted luks):

  • base as per @2000's article: 8M GPT boot partition and btrfs for the remaining space, with subvolumes @ and @home created automatically by Calamares
  • create additional subvolumes for /var/cache, /var/log, /var/tmp and ~/.cache (plus @/swap for hibernation and /.snapshots for subsequent Snapper use)
  • /var subvolumes nested within the @ parent and .cache within @home, as that initially sounded like the logical thing to do

In spite of my homework, I was unable to achieve this, probably because I haven't really grasped the nature of subvolumes.

Following a mix of @2000's guide and the Arch Wiki (possible first mistake: the mixing), I tried to create the /var subvolumes, e.g. sudo btrfs subvolume create /mnt/btrfs/@/var/log, and was greeted by ERROR: target path already exists (first sign that I hadn't properly understood subvolumes).
Then I thought that, since the paths already exist, I might "just" have to deal with mounting them, so I tried to mount /dev/mapper/… on /var/log with subvol=@/var/log, only to encounter wrong fs type, bad option, bad superblock, missing codepage or helper program, or other error.

As things made less and less sense compared to the ideal picture in my mind, I came to terms with my lack of understanding and compromised on a fully flat layout, attempting to create subvolumes at the same level as @ and @home. This ultimately resulted in a convoluted mess of parents and children, e.g. subvolume create /mnt/btrfs/@var and subsequently /@var/log. (In one such experiment I managed to create @var/cache/pacman/pkg, with each sub-branch being its own subvolume and a child of the preceding one, just to give you an idea of the desperation level reached.)
A final attempt with btrfs subvolume create /mnt/btrfs/@var-cache, mounting it with mount -o subvol=@var-cache /dev/mapper/... /mnt/btrfs/@/var/cache, resulted in both an empty @var-cache subvolume and the actual directory @/var/cache.
Several stops at Google and StackOverflow didn't help in clearing the fog.

I know I'd likely have to deal with moving data (rsyncing?) and creating fstab entries, as highlighted by @dalto here, but ultimately I am missing several pieces.
It's evident at this point that I got the whole btrfs subvolume idea wrong, but I still really wouldn't like to surrender and go back to an ext4 LVM-on-LUKS layout. May any kind soul with a working knowledge of btrfs then come to the rescue and put some salt (and solid judgment) into my relationship with subvolumes?

Dulcis in fundo, just to spice things up further, the GPT partitioning didn't really work either, resulting in an unbootable system.
I have to say I am attempting all this on my quite ancient (but still doing its job) pre-UEFI laptop (from around 2010 if I'm not mistaken, upgraded with an SSD), hence the BIOS probably isn't able to see the ee GPT partition as bootable. The trick of fdisk -t mbr /dev/sda didn't really work, but before trying sudo printf '\200\0\0\0\0\0\0\0\0\0\0\0\001\0\0\0' | dd of=/dev/sda bs=1 seek=462 as suggested in the Arch Wiki (and prematurely sending the SSD to heaven in case it screws up), I would kindly welcome a second opinion from you guys.
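For reference, I've also seen that gdisk/sgdisk can toggle the "legacy BIOS bootable" attribute (bit 2) on a GPT partition directly, which might be a gentler first attempt than raw dd; the device and partition number below are only examples to be checked against lsblk first:

```shell
# Set the legacy-BIOS-bootable GPT attribute (bit 2) on partition 1 of /dev/sda.
# Device and partition number are examples only; verify with lsblk before running.
sudo sgdisk --attributes=1:set:2 /dev/sda

# Confirm the attribute is now set:
sudo sgdisk --attributes=1:show /dev/sda
```

If that is enough for the BIOS to pick up the disk, the dd incantation can stay in the drawer.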

Thanks for your patience and sorry for my dumbness.

Let me see if I can be of help before the real experts chip in :slight_smile:

If you think of subvolumes as being the same as a folder for a moment, what you tried to do was to create a folder (subvolume) named log in /var, which obviously already exists. Hence the error you got is correct: you can't have a folder and a subvolume with the same name in the same location.

And given that /var/log is just a regular folder, this explains the second error message: you can't mount a regular folder as if it were a subvolume.

I have all my subvolumes at the root level and map them to their desired location via fstab, but to achieve what you tried to do I think the following steps are required:

  1. Rename /mnt/btrfs/@/var/log to e.g. /mnt/btrfs/@/var/log-old
  2. Run sudo btrfs subvolume create /mnt/btrfs/@/var/log
  3. Copy stuff from log-old to log

Note: this is conceptual; I didn't try it, and I'm not sure you can rename the log folder while it's in use.
Actually I may just try this out of curiosity…

Edit: This works.
Add these steps to the list above and you are done:
4. Reboot
5. sudo rm -r /var/log-old
6. Check success: sudo btrfs subv list /mnt/btrfs
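Putting the whole procedure together as one sketch (still conceptual; paths assume the btrfs top level is mounted at /mnt/btrfs as elsewhere in this thread):

```shell
# Convert /var/log from a plain directory into a nested subvolume.
# Assumes the btrfs top level is mounted at /mnt/btrfs; adjust paths to taste.
sudo mv /mnt/btrfs/@/var/log /mnt/btrfs/@/var/log-old        # park the old folder
sudo btrfs subvolume create /mnt/btrfs/@/var/log             # same name, now a subvolume
sudo cp -a /mnt/btrfs/@/var/log-old/. /mnt/btrfs/@/var/log/  # copy the contents across

# After a reboot (so nothing holds the old files open):
sudo rm -r /var/log-old
sudo btrfs subvolume list /mnt/btrfs                         # check the new subvolume shows up
```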

And by the way, you don’t need to reference your subvolumes from outside in. btrfs subvolume create /var/log will work just as well.


So with this command, you’re trying to create a subvolume named “log” in the subvolume @.

But you probably want to create a top-level subvolume that you can later mount to /var/log :wink:
(Assuming you've followed my wiki article and have mounted the newly installed btrfs system to a temporary directory /mnt/btrfs), try the following:

  1. Create top level subvolume named @var-log
    sudo btrfs subvolume create /mnt/btrfs/@var-log

  2. to mount in fstab you would need to add something like this
    [...] /var/log btrfs subvol=@var-log,[...]
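To make that concrete, a full line might look roughly like the following; the UUID and the extra mount options are placeholders and should be copied from the existing @ entry in your /etc/fstab:

```
# /etc/fstab -- hypothetical example entry; reuse your own UUID and options
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var/log  btrfs  subvol=@var-log,noatime,compress=zstd  0 0
```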

You probably know this, but just to make sure …
The (additionally) mounted subvolumes, apart from @ and optionally @home, won’t be part of the snapshots that Timeshift or Snapper create. So, you won’t be able to store and restore their state with these programs.


If you create the subvolume directly where you want it to logically reside you can save yourself the fstab hassle.


You’re absolutely right, nested subvolumes don’t need to be mounted by fstab.

My personal experience with nested subvolumes and creating and restoring @ and @home with additional nested subvolumes led me to only use and recommend top level subvolumes though.

Mind sharing what these issues were? I am genuinely interested, as I am still learning the ins and outs of btrfs.

For the case at hand I'd say that nested is probably fine, because the additional subvolumes look to be all about temporary data, most likely to exclude that data from snapshots and backups.

By the way, I am not using nested subvolumes either, but so far the only reason is that I want to be able to give them names which let me clearly tell them apart from folders: @var, @home, etc.

I've had issues with Timeshift where snapshots couldn't be deleted without deleting the nested subvolumes by hand first. When trying to delete certain snapshots, Timeshift greyed them out but didn't delete them. If I remember correctly, all snapshots created before the restored one had this issue.

So basically, at the time of my testing, Timeshift and nested subvolumes were a no-go. Might be fixed now.


That's not only a Timeshift problem; it's a general "issue". You must delete the nested subvolume first. I think that's even the case when you want to delete a subvolume with the basic btrfs command
(sudo btrfs subvolume delete @whatever_subvolume)
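A quick sketch of that deletion order (subvolume names are just examples):

```shell
# A subvolume containing a nested subvolume cannot be deleted directly;
# delete bottom-up instead.
sudo btrfs subvolume delete /mnt/btrfs/@whatever_subvolume/nested  # children first...
sudo btrfs subvolume delete /mnt/btrfs/@whatever_subvolume         # ...then the parent
```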


So the basic conclusion is: if you want to use applications like Timeshift or Snapper that automate the creation and deletion of snapshots (creating and keeping a certain number of daily, weekly, etc. snapshots), you should not use nested subvolumes; use top-level ones instead.


And by coincidence I ran into a nice example yesterday which illustrates the issue: I moved my Arch server onto a larger SSD, which conveniently worked via

btrfs send @ | btrfs receive newDrive

plus a few manual config steps.
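For anyone following along: btrfs send only accepts read-only snapshots, so the shorthand above expands to something like this (mount points are illustrative):

```shell
# Migrate the root subvolume to a second btrfs filesystem mounted at /mnt/newdrive.
sudo btrfs subvolume snapshot -r /mnt/btrfs/@ /mnt/btrfs/@_migrate   # read-only snapshot for send
sudo btrfs send /mnt/btrfs/@_migrate | sudo btrfs receive /mnt/newdrive

# On the target, make a writable @ from the received read-only snapshot:
sudo btrfs subvolume snapshot /mnt/newdrive/@_migrate /mnt/newdrive/@
sudo btrfs subvolume delete /mnt/newdrive/@_migrate
```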

But what I didn't think of is that docker and docker-compose make heavy use of nested subvolumes, which means my jitsi install is now completely broken. Haven't figured out how to rescue it yet :frowning:


First, thanks to @TomZ, @2000 and @T-Flips for their precious contributions; thinking of subvolumes as directories, and not as an abstraction of them, makes more sense of the plethora of errors I have encountered (to be honest, I got slightly confused by the layout section of the Arch Wiki Snapper page).

Moreover, after reading your experiences with nested subvolumes (thanks again to @TomZ for actually trying the conceptual idea from the first post, and sorry to hear about your jitsi instance!!), I decided a flat layout is definitely going to be my approach as well… I am all for experimenting, but I'd need some stability in the long run as well :smiley:
That considered, as far as I have understood, my next steps are (example with /var/cache; similar process for /var/log, /var/tmp and ~/.cache):

  • btrfs subvolume create @var-cache
  • subsequently rsync -aAvh @/var/cache/ @var-cache/ (trailing slashes, so the contents are copied rather than the directory itself) to recreate the subdirectory structure
  • then map /dev/mapper/... /var/cache btrfs subvol=@var-cache,... into fstab
  • ultimately empty @/var/cache before rebooting, keeping the directory itself as the mount point

For snapshots, I'd create the related subvolume with btrfs subvolume create @snapshots, mkdir @/.snapshots as its mount point, and map it in fstab.
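The steps above, pulled together as one sketch (UUID, device and mount options are placeholders; the btrfs top level is assumed mounted at /mnt/btrfs):

```shell
# Move /var/cache into a top-level subvolume @var-cache.
sudo btrfs subvolume create /mnt/btrfs/@var-cache
sudo rsync -aAXvh /mnt/btrfs/@/var/cache/ /mnt/btrfs/@var-cache/  # trailing slashes: copy contents, not the dir
# Add the mount to fstab (placeholder UUID/options, copy from an existing entry):
#   UUID=xxxx-xxxx  /var/cache  btrfs  subvol=@var-cache,noatime  0 0
sudo rm -rf /mnt/btrfs/@/var/cache/*   # empty the old location, but keep the
                                       # (now empty) directory as the mount point
# reboot, then verify the mount actually happened:
findmnt /var/cache
```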

@2000 I am aware of this and I am trying to complicate my life on purpose :smiley:
Specifically, afaik /var/tmp cannot be mounted as tmpfs, but it doesn't hold files strictly necessary for an eventual restore, so it sounds like a good candidate; /var/cache is essentially the pacman cache in my past experience, and similarly in ~/.cache the most relevant parts are yay packages and xfce sessions, so no strictly essential files are present there either (I might be wrong, naturally, as usual). Finally, /var/log is separate mainly because, if I'm going to do a restore, it will be because I screwed something up, and having the logs not rolled back seems a good idea for a post-mess debriefing.

Finally, about the GPT magic formula: I think I'll first see if I manage to create a type-0 bootable partition before going nuclear with dd (cannot deny that command has always scared me, given its power to destroy and create).

Thanks again for all your kind help!


Looks like you're doing it as suggested:

True, afaik. I subvolumed it as well :laughing: Nothing in there needs to be "snapshotted" by Snapper, so it can be "factored out" / excluded, i.e. put in a separate subvolume.

I guess so. I just subvolumed /var/cache/pacman/pkg; most of the other subdirs are empty on my system. Shouldn't be a problem to exclude the whole /var/cache dir as you suggest. The same goes for ~/.cache (if you even do snapshots of your @home subvolume via Snapper).

Exactly. Exclude /var/log (by making it a separate subvolume) so logs don't get rolled back when you roll back the system. I have that as well.
So there are two cases I always think of for putting a directory in its own separate subvolume:

  1. you don't want the content to be rolled back when you roll back your system (i.e. /var/log)
  2. you don't need to snapshot the content because it's not relevant (i.e. /var/tmp, /var/cache/pacman/pkg)

Those other subvolumes can be snapshotted separately (like @home). You can create a separate Snapper config for them if you want (I don't snapshot my @home; I do a file-based backup of my home folder with vorta).

By the way, I find it quite useful to have a mount point "above" all the (flat) subvolumes (does that make sense? :sweat_smile:)
Maybe you have that already, but i have a line in my fstab that goes like this:
UUID=123 /mnt/btrfs-root btrfs defaults,noatime,space_cache,autodefrag,compress=zstd 0 2

That means I created a folder /mnt/btrfs-root and have my "root disk" mounted to that folder (notice there is no subvol=@whatever in the options section of that line).

That means I can go to /mnt/btrfs-root in my file browser and see all the subvolumes there (@, @home, @snapshots, ...). I can easily create new subvolumes, delete them, snapshot them. For example: system went south :rofl: -> just boot whatever distro's live ISO -> browse to the disk my (broken) EnOS install is on -> see all the subvolumes. Rename the broken root subvolume with sudo mv @ @.bak, then do something like sudo btrfs subvolume snapshot ./@snapshots/<some number>/snapshot @
So I take a functional snapper snapshot and… snapshot it to @ :grin:, i.e. create a (writable) snapshot of a functional snapshot ( :joy:) as the destination @, which is my root -> broken system repaired in seconds.
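Spelled out as commands from a live ISO (device and snapshot number are placeholders):

```shell
# Roll back by swapping @ for a known-good snapper snapshot.
sudo mount /dev/sdXn /mnt/btrfs-root   # mount the btrfs top level, no subvol= option
cd /mnt/btrfs-root
sudo mv @ @.bak                        # park the broken root subvolume
sudo btrfs subvolume snapshot ./@snapshots/<N>/snapshot ./@   # writable copy of a good snapshot
# reboot into the repaired system; delete @.bak later at leisure
```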

Maybe it looks a bit complicated the way I described it… but it isn't. No need for grub-btrfs (which can boot into snapshots); sure, that's maybe easier, but I don't need it. I just boot an ISO I have handy and "snapshot back" a functional system state, done.

Omph, quite a long post, wasn’t intended…


exactly my approach to subvolumes.

That's what I'd like to put in place as well: mainly snapshots (plus a weekly backup) of the main system, while @home has its files backed up as-is (for this I'm still considering alternatives to plain rsync for incremental weekly backups). Never heard of vorta, I'll look it up.
Hopefully I'll find the time later today to test the layout and finally do the actual installation.

Actually I had read about a similar approach the other day (was it you, @T-Flips, or @TomZ?), and it sounds pretty interesting for a desperate situation where one can solve the issue with base coreutils.

Thanks again for your contribution!

An "outside-in" mount point is one of the first things I set up whenever I install my machines. It took a while for me to get my old head around the concept, but now I think it's brilliant :slight_smile:
I use /mnt/ for that:

$ ll /mnt
total 0
drwxr-xr-x 1 root root  190 Okt  5 18:34 @
drwxr-xr-x 1 root root   12 Okt  5 16:42 @home
drwxr-xr-x 1 root root   12 Okt  4 17:15 @homenew
drwxr-xr-x 1 root root  206 Okt  5 16:42 @old
drwxr-xr-x 1 root root 1,3K Okt  6 10:14 snapshots
drwxr-xr-x 1 root root   16 Mär  7  2020 @swap

@ and @home are what is actively running right now. Flip @ and @old (mv), reboot, and I am running a different OS; or I could tell the boot loader to start @old instead of @… this is a great concept in my opinion.


I thought you would snapshot your @home as well; otherwise, why create a separate subvol for ~/.cache? I'm curious :grinning:

vorta is a GUI for borg. It can be found in the AUR.

not me as far as i remember…

…wild. You really live on the edge! :laughing:
I've done such things for changing DEs already, but not for running a whole other OS.


Ok, it was you, @TomZ! Yeah, at first I thought it quite a peculiar approach, but that way you get an incredible level of flexibility (and, as @T-Flips correctly pointed out, you like living on the edge :joy: )

Sorry, in the rush of answering within the 5-min break I had, I cut it to the bone… I'd snapshot both @ and @home, but while for @ the snapshots also constitute the main form of external backup, for @home, due to the nature of its content (docs, pictures, projects, …), a snapshot would be a kind of "first line of defense", while a plain backup (still thinking about which form it will take) would give me portability and accessibility on non-btrfs platforms as well. If I overlooked anything or got anything wrong, I'm glad to revise my view!

About the btrfs adventures: I tried the subvolume ops as per

and everything seems to work.

Cannot say the same about the booting process: after various approaches I keep getting a boot failure and need to go through the EOS live GRUB to boot the actually installed system. Being quite an annoying issue in itself, I may open another thread for it, to avoid polluting this one, which is focused on btrfs rather than GRUB.

Thanks again for your kind contributions that allowed me to boost my rather limited knowledge on btrfs!

ah, ok.

Nothing comes to my mind in regards to snapshotting @home. Putting ~/.cache in a separate subvol is indeed a good idea then.

Just some things to consider about btrfs for the home partition in general: if you use VirtualBox or the like, you should disable CoW for the VM images (or their containing folder); those VMs usually reside in your home directory.
The same goes for databases.
I think there was also something that does not (or did not, in the past) support btrfs, but I can't remember if it was Steam or the Dropbox client or whatever…

Yeah, i would also recommend this.




That article is from August 2018 and it says "starting in November" (I'm assuming of 2018), so perhaps something has changed between then and now. Maybe they saw the error of their ways.

Thanks for pointing these out, particularly the VMs: I'm going to chattr +C the directory holding them.
As for Dropbox, I've dropped it, so no problem :wink:
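One caveat I picked up along the way: chattr +C only affects files created after the flag is set, so it should go on an empty directory before the VM images land in it (the path is just an example):

```shell
# Disable copy-on-write for future VM images. The +C attribute only applies
# to files created after it is set; existing files keep their CoW behaviour.
mkdir -p ~/VMs
chattr +C ~/VMs
lsattr -d ~/VMs    # the 'C' flag should appear in the listing
# now create or move the VM images into ~/VMs
```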