Migrating from GRUB to rEFInd: snapshots and other things

Still don’t have snapshots set up. But my first crack at an EOS rEFInd theme isn’t bad, if I do say so myself.
[screenshot: 20220909_200103]

3 Likes

OK, figured out why I’m not seeing the snapshots… it’s because I restored to one, but the restored root is apparently still considered a snapshot. Now I have to figure out how to untangle all this and apply the changes to the main volume. I’m used to snapshots, just not to the way Btrfs does them; I use them in VMware land.

Error
refind_btrfs/__main__.py[598]: ERROR (refind_btrfs.state_management.refind_btrfs_machine/refind_btrfs_machine.py/run): Subvolume '@' is itself a snapshot (parent UUID - 'd2699933-4bce-4349-a007-f1c6ad3c4301'), exiting...

Output of btrfs subvolume list /:

ID 256 gen 32110 top level 5 path timeshift-btrfs/snapshots/2022-09-08_18-02-39/@
ID 257 gen 34373 top level 5 path @home
ID 258 gen 34351 top level 5 path @cache
ID 259 gen 34373 top level 5 path @log
ID 260 gen 26 top level 256 path timeshift-btrfs/snapshots/2022-09-08_18-02-39/@/var/lib/portables
ID 261 gen 27 top level 256 path timeshift-btrfs/snapshots/2022-09-08_18-02-39/@/var/lib/machines
ID 323 gen 34373 top level 5 path @
ID 327 gen 32160 top level 5 path timeshift-btrfs/snapshots/2022-09-08_18-25-08/@
ID 328 gen 34365 top level 323 path .snapshots
ID 329 gen 33336 top level 328 path .snapshots/1/snapshot
ID 330 gen 33347 top level 5 path timeshift-btrfs/snapshots/2022-09-09_16-45-33/@
ID 331 gen 33351 top level 5 path timeshift-btrfs/snapshots/2022-09-09_16-46-36/@
ID 332 gen 33382 top level 328 path .snapshots/2/snapshot
ID 337 gen 33822 top level 328 path .snapshots/7/snapshot

EDIT: forgot to include this:

sudo btrfs subvolume show /
@
	Name: 			@
	UUID: 			0c240ef3-d755-594b-887a-b3b35c6f3847
	Parent UUID: 		d2699933-4bce-4349-a007-f1c6ad3c4301
	Received UUID: 		-
	Creation time: 		2022-09-08 18:02:39 -0400
	Subvolume ID: 		323
	Generation: 		34380
	Gen at creation: 	32110
	Parent ID: 		5
	Top level ID: 		5
	Flags: 			-
	Send transid: 		0
	Send time: 		2022-09-08 18:02:39 -0400
	Receive transid: 	0
	Receive time: 		-
	Snapshot(s):
				@/.snapshots/1/snapshot
				@/.snapshots/2/snapshot
				@/.snapshots/7/snapshot
				timeshift-btrfs/snapshots/2022-09-08_18-25-08/@
				timeshift-btrfs/snapshots/2022-09-09_16-45-33/@
				timeshift-btrfs/snapshots/2022-09-09_16-46-36/@

That is pretty normal. How else would you restore a snapshot? You typically take a snapshot and place it in the location of the old subvolume. Even if you moved the snapshot during the restore, it would still be a snapshot.

btrfs is a little strange in that snapshots are subvols themselves.
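To make that concrete, here is a rough sketch of what a restore typically does under the hood (paths are illustrative and assume the top level of the filesystem, subvolid=5, is mounted at /mnt; the snapshot path matches the listing above):

# move the current root subvolume aside
mv /mnt/@ /mnt/@.old
# put a writable snapshot of the chosen Timeshift snapshot in its place
btrfs subvolume snapshot /mnt/timeshift-btrfs/snapshots/2022-09-08_18-02-39/@ /mnt/@
# once the system boots fine again, the old root can be deleted
btrfs subvolume delete /mnt/@.old

The @ created this way records the source snapshot’s UUID in its “Parent UUID” field, which is exactly what the btrfs subvolume show output above reflects.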

Yeah, makes sense. I guess I don’t get why refind-btrfs is complaining about it, though. Anyway, I should probably clean out the old Timeshift backups, just to make sure that isn’t an issue. As it is, it looks like the GRUB config is updating with the Snapper snapshots, so in a pinch I can use that to get to them. So nice having two bootloaders when one doesn’t work 100% right. 🙂

Hi, refind-btrfs author here.

I don’t get that error on my machine - the “@” subvolume really has no parent subvolume, i.e., it is not a snapshot of another subvolume found on the same filesystem. My understanding is that any subvolume which has a defined “Parent UUID” field is in fact not a root subvolume but rather a snapshot. Btrfs docs could use some improvements, I’m not gonna lie…
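A quick way to check is:

sudo btrfs subvolume show / | grep 'Parent UUID'

If that prints an actual UUID rather than “-”, the subvolume mounted at / is itself a snapshot of another subvolume on the same filesystem, which is what triggers the check.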

Anyhow, you can easily turn this validation off if you so desire.
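The relevant toggle lives in /etc/refind-btrfs.conf, roughly along these lines (the exact option name may differ between versions, so check the comments in the shipped file):

# /etc/refind-btrfs.conf
exit_if_root_is_snapshot = false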

I think I’ll do that. One thing I wanted to check with you: is it required to have manual stanzas for the snapshots to show up? I’m working off of the automatic detection right now; I couldn’t get the manual ones working yet.

Yes, a manual boot stanza is one of the prerequisites (last bullet point). It really isn’t that hard to write one yourself, though.

Otherwise, I’d have to re-implement whatever logic rEFInd’s author himself implemented (and port it from C to Python as well), which seems to be a bit too much work, at least for the time being.

1 Like

No problem. I took a couple of cracks at it, hadn’t gotten it working yet, and fell back to the automatic mode. Sounds like I need to try again with the manual stanzas… of course I’ll leave the automatic ones in place for now, just in case. I’m sure I’ll figure out what I’m doing wrong eventually.

Hello @Venom1991
Any chance you can explain how to set up btrfs snapshots so they show up in rEFInd? I’ve looked over the info on the refind-btrfs package, but I’m not sure I understand how and where to configure it. I’m assuming this boots from the vmlinuz-linux image? I have used rEFInd triple-booting before with grubx64.efi, and I know how to boot from the vmlinuz-linux image file. I have a btrfs file system with btrfs-assistant, snapper, and snap-pac. I would like to try the refind-btrfs package but am not quite following the process of configuring it. I didn’t make any special changes to the refind.conf file before, but I assume that’s needed for this setup. I would most likely try to set it up in a VM for now.

Thanks.

But that just tells you where it came from. As soon as you restore a snapshot, the target subvolume originates from that snapshot in one way or another. A subvolume being a snapshot doesn’t tell you anything but its history. Validating that @ can’t be a snapshot doesn’t make much sense in practice.

Still working on the manual stanzas; I think I’ve got my mistakes figured out, though. Also did a short writeup on Reddit about the whole thing… once you’ve tried to configure it, it’s more understandable why it isn’t a default option. And just out of curiosity, does anyone know why Reddit previews show up in what looks to be German? I might have that wrong; my language knowledge runs toward the more Latin-style stuff.

You didn’t explain how you manually configured it? That’s the part I want to try in a VM.

Probably because I still don’t have it working yet. The manual stanzas still aren’t booting.

Where do you set the manual stanzas? Mine already has boot info in the .conf file. Do you take that out and replace it, or are you setting the info in the other conf file that creates it?

I took the parameters from the refind_linux.conf file in /boot and added them to /boot/efi/EFI/refind/refind.conf. I used the Arch Linux stanza as a template to create the EndeavourOS Zen and LTS stanzas, replacing the kernel parameters with the ones from refind_linux.conf and commenting out “disabled”.
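For reference, the entry in /boot/refind_linux.conf that those parameters came from looks roughly like this (only the first of the auto-generated lines is shown):

"Boot with standard options"  "root=UUID=115e14e8-db06-40b9-8639-6934372d846e rw rootflags=subvol=@ resume=UUID=3cbbdf01-e7a7-4fa9-a24f-48180d6f4b45 loglevel=3 nowatchdog nvme_load=YES i8042.probe_defer"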

1 Like

@ricklinux In fact, here’s the stanza; see if anything jumps out at you. I think it’s right, but I might have been staring at it for too long. I’ve got the same stanza for an LTS kernel, except with “zen” changed to “lts” everywhere. I did rename the volume; I wanted it to look right on the boot screen.

menuentry "EndeavourOS Zen" {
    icon     /EFI/refind/icons/endeavour-zen.png
    volume   "EndeavourOS"
    loader   /boot/vmlinuz-linux-zen
    initrd   /boot/initramfs-linux-zen.img
    options  "root=UUID=115e14e8-db06-40b9-8639-6934372d846e rw rootflags=subvol=@ resume=UUID=3cbbdf01-e7a7-4fa9-a24f-48180d6f4b45 loglevel=3 nowatchdog nvme_load=YES i8042.probe_defer"
    submenuentry "Boot using fallback initramfs" {
        initrd /boot/initramfs-linux-zen-fallback.img
    }
    submenuentry "Boot to terminal" {
        add_options "systemd.unit=multi-user.target"
    }
#    disabled
}

What happens when you delete the subvolume from which the current root subvolume was restored? Does it still have the “Parent UUID” field defined? If so, that’s pretty misleading considering that its parent no longer exists at that point.

Here’s my reply to a similar discussion on GitHub. It’s not really a simple problem to solve, considering one can boot into snapshot A, then reboot into snapshot B, and so on.
Do I keep track of what is basically a forest of trees (where each node is a subvolume) at that point or not? It seems like a serious overcomplication, imho.

The more I think about this problem, the more it makes sense for the user to explicitly define the current root subvolume’s UUID directly in the refind-btrfs.conf file. I could even try playing around with the PKGBUILD and programmatically output the value myself, somehow.
There would be no confusion that way, although this approach would require manual intervention: after refind-btrfs is installed (if it can’t be done with the install script) and after the root subvolume is restored from a snapshot of itself.
That shouldn’t be too much of a hassle, since these operations are usually rare (the installation itself being performed exactly once, of course), unless the user really breaks his or her system often and is therefore forced to restore the root subvolume just as often.
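If it comes to that, getting the value to paste would be a one-liner, something like this (the config option itself is hypothetical at this point; the command only prints the UUID of the subvolume currently mounted at /):

sudo btrfs subvolume show / | grep -m1 -E '^[[:space:]]*UUID:' | awk '{print $2}'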

Can you explain the problem you are trying to solve in the first place? Why does it matter whether the subvolume mounted at @ has a parent UUID or not? I am not sure I follow the issue from reading the GitHub link.

But it changes every time a restore is done, so that means you need to reconfigure after each restore.

This is actually surprisingly common for many of our users. Restores are often done as a convenience.

Yes, it does. My understanding is that the field doesn’t indicate that the subvolume currently has a parent with that UUID. It is an indicator of its history, not of its current state.

You surely won’t find anything useful on the AUR page itself. Both the readme and config files contain a wealth of information, at least I think they do.

Honestly, I shudder at the thought of writing an exhaustive tutorial. That is how much I dislike writing documentation, and it took me a long time to write those two files, especially the readme. The source code itself remains undocumented but is fairly readable, from my (obviously subjective) perspective.

There are a number of “variables” one would have to take into account while writing a tutorial for a tool such as this one, just to name a few:

  • Encrypted vs. unencrypted filesystem
  • Separate /boot or /boot as part of /
  • Snapper or Timeshift or something else (the tool was designed to be agnostic with regard to these snapshot management tools)

My system is unencrypted, has /boot as part of /, and I use Snapper, which means that I’d have to spin up virtual machines, try these various setups, see what works, and finally document all of the steps. It’s a lot of time-consuming and, worse yet, very boring work.