Using snapper to snapshot and restore a system

Thanks @dalto!

That confirms my suspicion. It’s a pity that such a commonly used app has adopted such a restrictive naming convention.

Alright, I will look into it. I have never used it, so I will need to read up on it. Hope it is alright to leave this topic open in case I run into some issues and need some support.

As it relates to btrfs snapshots, timeshift is limited in so many ways beyond just the naming convention.

snapper is much more flexible but is also more complicated as a result.

Feel free. I recently set up snapper on my “boot 6 distros from one btrfs partition” project and learned quite a bit about it in the process.

:grinning:

In a nutshell, this is what got me to start the other thread asking about the possibility of installing a system by practically unpacking it into a partition, or in this case a subvolume on btrfs, and making the necessary modifications to make it bootable/functional.

But that is another topic. I’ll get back to that thread to give more detail on my current disk/system setup.

OK, well, I won’t go into detail so as not to stray off topic, but I took fairly detailed notes of how I got all the distros set up in a single partition, if you have any questions. A small amount of additional information is in this topic.

Thank you!
I will have a look.

The problem with me is that sometimes I try to lift several Watermelons with one hand :sweat_smile:

Note to myself: your first priority is to get snapper going

:watermelon:

Here is some info about the system I would like to snapshot:

cat /etc/fstab
# Static information about the filesystems.
# See fstab(5) for details.

# <file system> <dir> <type> <options> <dump> <pass>

# /dev/nvme0n1p4
UUID=139F-295E      	                        /boot/efi 	              vfat      	   rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro	0 2

# /dev/nvme0n1p6
UUID=42f63a16-9a8b-421a-b509-7a3987aaa661	    /                         btrfs     	   rw,noatime,compress=lzo,ssd,discard=async,space_cache,subvolid=344,subvol=/@arch-budgie-root	0 0

# /dev/nvme0n1p6
UUID=42f63a16-9a8b-421a-b509-7a3987aaa661	    /home     	              btrfs     	   rw,noatime,compress=lzo,ssd,discard=async,space_cache,subvolid=345,subvol=/@arch-budgie-home	0 0
#
#
#
UUID=42f63a16-9a8b-421a-b509-7a3987aaa661       /Data/Backups              btrfs           rw,noatime,compress=lzo,autodefrag,ssd,discard=async,space_cache,subvol=@Backups      0 0
UUID=42f63a16-9a8b-421a-b509-7a3987aaa661       /Data/Documents            btrfs           rw,noatime,compress=lzo,autodefrag,ssd,discard=async,space_cache,subvol=@Documents    0 0
UUID=42f63a16-9a8b-421a-b509-7a3987aaa661       /Data/Pictures             btrfs           rw,noatime,compress=lzo,autodefrag,ssd,discard=async,space_cache,subvol=@Pictures     0 0
UUID=42f63a16-9a8b-421a-b509-7a3987aaa661       /Data/Music                btrfs           rw,noatime,compress=lzo,autodefrag,ssd,discard=async,space_cache,subvol=@Music        0 0
UUID=42f63a16-9a8b-421a-b509-7a3987aaa661       /Data/Library              btrfs           rw,noatime,compress=lzo,autodefrag,ssd,discard=async,space_cache,subvol=@Library      0 0
UUID=42f63a16-9a8b-421a-b509-7a3987aaa661       /Data/Videos               btrfs           rw,noatime,compress=lzo,autodefrag,ssd,discard=async,space_cache,subvol=@Videos       0 0
UUID=42f63a16-9a8b-421a-b509-7a3987aaa661       /Data/VirtualMachines      btrfs           rw,noatime,compress=lzo,autodefrag,ssd,discard=async,space_cache,subvol=@VirtualMachines
UUID=42f63a16-9a8b-421a-b509-7a3987aaa661       /home/pebcak/.mozilla      btrfs           rw,noatime,compress=lzo,autodefrag,ssd,discard=async,space_cache,subvol=@Mozilla      0 0
UUID=42f63a16-9a8b-421a-b509-7a3987aaa661       /home/pebcak/.var          btrfs           rw,noatime,compress=lzo,autodefrag,ssd,discard=async,space_cache,subvol=@flatpak-user-data      0 0
lsblk -fs
nvme0n1p1 vfat   FAT32               C259-BB17                                           
`-nvme0n1                                                                                
nvme0n1p2 ext4   1.0   boot          853f2048-816a-462c-9e20-9f38b9595c18                
`-nvme0n1                                                                                
nvme0n1p3 btrfs        Fedora34-root f273502b-e1f8-4a57-b9f2-111a8621c3ea                
`-nvme0n1                                                                                
nvme0n1p4 vfat   FAT32               139F-295E                             107.3M    15% /boot/efi
`-nvme0n1                                                                                
nvme0n1p5 btrfs        ArchGnome     4e33451a-4483-4d0a-9fee-c48e63991416                
`-nvme0n1                                                                                
nvme0n1p6 btrfs                      42f63a16-9a8b-421a-b509-7a3987aaa661  303.7G    17% /home/pebcak/.var
|                                                                                        /home/pebcak/.mozilla
|                                                                                        /home
|                                                                                        /Data/VirtualMachines
|                                                                                        /Data/Videos
|                                                                                        /Data/Music
|                                                                                        /Data/Pictures
|                                                                                        /Data/Library
|                                                                                        /Data/Documents
|                                                                                        /Data/Backups
|                                                                                        /
`-nvme0n1                                                                                
nvme0n1p7 swap   1                   85fbc1f0-262f-4576-b7c0-62c1b121624f                [SWAP]
`-nvme0n1
# btrfs subvolume list /
ID 256 gen 44511 top level 5 path @Pictures
ID 257 gen 44511 top level 5 path @Documents
ID 258 gen 44511 top level 5 path @Library
ID 259 gen 44511 top level 5 path @Backups
ID 260 gen 44511 top level 5 path @Videos
ID 261 gen 44511 top level 5 path @Music
ID 266 gen 44511 top level 5 path @VirtualMachines
ID 333 gen 50174 top level 5 path @Mozilla
ID 344 gen 50174 top level 5 path @arch-budgie-root
ID 345 gen 50174 top level 5 path @arch-budgie-home
ID 346 gen 44755 top level 344 path var/lib/portables
ID 347 gen 44757 top level 344 path var/lib/machines
ID 352 gen 50173 top level 5 path @flatpak-user-data

I think
ID 344 gen 50174 top level 5 path @arch-budgie-root
mounted at / of the running system is the one I would like to snapshot.

So if I have understood the ArchWiki, I could simply run

snapper -c root create-config /

to create a config file for snapshotting root, based on the config templates.
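
If I have understood correctly, I could then double-check that the config exists with something like:

snapper list-configs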

I don’t think I should edit anything in the config file (or do I?) since almost all the time-related automatic stuff in there depends on whether the cron daemon is running on the system.

I would then just manually create a snapshot by running:

snapper -c root create --description "some description"

Perhaps I would need to create some subvolumes for /var/cache and /var/log to exclude them from the snapshot?

Looking forward to your comments and advice before I go ahead with the above.

You probably don’t want to mount your subvolumes by both id & name. It works, but it may get confusing later on as things change. I would pick one or the other.
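
To illustrate with your own root line, the same mount could be written either way (values copied from your fstab):

UUID=42f63a16-9a8b-421a-b509-7a3987aaa661    /    btrfs    rw,noatime,compress=lzo,ssd,discard=async,space_cache,subvol=/@arch-budgie-root    0 0
UUID=42f63a16-9a8b-421a-b509-7a3987aaa661    /    btrfs    rw,noatime,compress=lzo,ssd,discard=async,space_cache,subvolid=344    0 0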

Is there a reason you are mounting this stuff at /Data? Symlinking /Data into your home directory makes sense when /Data is a real partition, but when these are all subvols you can mount the subvols directly into your home directory.

You should create snapshots of every subvolume with important data. Snapshots aren’t only useful for system recovery. They are great for easily recovering data. I take hourly snapshots of almost all my subvolumes.

I do have different retention settings depending on the data. Even if you have a separate backup strategy keeping some snapshots is useful.
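
As a rough sketch (the config names are just examples and the paths are from your current fstab), you can create a config per data subvolume and then tune its retention separately:

sudo snapper -c documents create-config /Data/Documents
sudo snapper -c documents set-config "TIMELINE_LIMIT_HOURLY=12" "TIMELINE_LIMIT_DAILY=7"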

root is the default so it is sufficient to simply do:

sudo snapper create-config /

However, given what you are doing you would need to decide if that is what you really want. For example, on my multi-boot machine I do something more like this:

sudo snapper -c arch-budgie-root create-config /
sudo snapper -c arch-budgie-home create-config /home

I would set your general preferences in the default config template first since that is used to create all your individual configurations. The default snapshot retention rules are fairly nonsensical.
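
For reference, the retention-related keys in /etc/snapper/config-templates/default look something like this (the numbers here are just an illustration, not a recommendation):

TIMELINE_CREATE="yes"
TIMELINE_CLEANUP="yes"
TIMELINE_LIMIT_HOURLY="5"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"
NUMBER_LIMIT="10"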

The cron jobs have been superseded by systemd-timers. You will probably want to enable snapper-cleanup.timer, snapper-timeline.timer and potentially snapper-boot.timer depending on your exact needs.
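
For example:

sudo systemctl enable --now snapper-timeline.timer snapper-cleanup.timer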

Those settings control what those timers actually do so you probably want them to be set.

I would. I would also do that first.

Thank you so much for such wonderful comments! I can see much more clearly now.

Sure, it seems superfluous. I would keep the name since it is more “human readable”.

You are so right! I guess it was the force of old habit that I mounted them there. Will correct it. Much neater.

The truth is that most of my important data is backed up on various external hard drives. But perhaps it would be an idea to make a separate subvolume for the data I would like to back up and make snapshots of that.

This is much better organized for sure.

So I should first set the number of snapshots to be retained in the config template before running the create-config command. Passing the -c flag each time I make a snapshot would then take care of cleaning up the old snapshots. Is this correct?

I think perhaps I would limit myself to doing this manually before I get the hang of it and then pass control over to systemd. What do you think?

And first things first: making those subvolumes for /var/cache and /var/log.
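
Something like this, I imagine (the subvolume names are just my working idea, and the top level of the btrfs partition would need to be mounted somewhere like /mnt first):

sudo mount -o subvolid=5 UUID=42f63a16-9a8b-421a-b509-7a3987aaa661 /mnt
sudo btrfs subvolume create /mnt/@var-log
sudo btrfs subvolume create /mnt/@var-cache
sudo cp -a /var/log/. /mnt/@var-log/
sudo cp -a /var/cache/. /mnt/@var-cache/

And then add the corresponding entries to fstab for /var/log and /var/cache.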

Do you think I got it covered?

I also have backups but I still take snapshots. Snapshots are near instantaneous and take very little overhead if you don’t keep them around for excessive periods of time. It is generally much easier to access a snapshot than a backup as well. You probably also don’t take backups hourly.

Snapshots are great for things like when you completely mess up a config file and want to see what it looked like a little while ago, or when you corrupt a document and don’t want to go all the way back to a backup.

It depends on where the -c flag is. The one at the beginning specifies the config. If it is after create, it indicates a cleanup algorithm. However, the cleanup itself is performed by the systemd timer. Alternatively, you can also run snapper cleanup manually.
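
To make the difference concrete (the description is just a placeholder):

# -c before the subcommand selects the config
sudo snapper -c root create --description "before update"
# -c (short for --cleanup-algorithm) after create tags the snapshot for a cleanup algorithm
sudo snapper -c root create -c timeline --description "before update"
# cleanup can also be run by hand for a given algorithm
sudo snapper -c root cleanup timeline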

I would do one manual snapshot to verify it is working and then let the timers handle it. Otherwise you may end up with manual cleanup that is not worth your effort.

Making mistakes with snapshots is a pretty low-risk problem. You can easily remove the snapshots if they are not right.

Great!
Thank you so much again for paying attention, taking your time, and sharing your insight and knowledge! You are a fantastic teacher!
:purple_heart:

I think I’ll let this rest a bit while I go grab some food. Then I’ll go for it! I feel I have it now! I’ll get back to you with the result.

and

sudo snapper -c arch-budgie-root create

did it!!
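
To double-check, the new snapshot also shows up with:

sudo snapper -c arch-budgie-root list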

the new subvolume list
ID 256 gen 50463 top level 5 path @Pictures
ID 257 gen 50542 top level 5 path @Documents
ID 258 gen 50517 top level 5 path @Library
ID 259 gen 50389 top level 5 path @Backups
ID 260 gen 50469 top level 5 path @Videos
ID 261 gen 44511 top level 5 path @Music
ID 266 gen 44511 top level 5 path @VirtualMachines
ID 333 gen 50695 top level 5 path @Mozilla
ID 344 gen 50693 top level 5 path @arch-budgie-root
ID 345 gen 50695 top level 5 path @arch-budgie-home
ID 346 gen 50621 top level 344 path var/lib/portables
ID 347 gen 50621 top level 344 path var/lib/machines
ID 348 gen 50621 top level 5 path @pacman-pkg
ID 352 gen 50672 top level 5 path @flatpak-user-data
ID 356 gen 50455 top level 5 path @Downloads
ID 357 gen 50621 top level 5 path @var-arch-budgie
ID 358 gen 50695 top level 357 path @var-arch-budgie/log
ID 359 gen 50672 top level 357 path @var-arch-budgie/cache
ID 362 gen 50621 top level 344 path .snapshots
ID 363 gen 50589 top level 362 path .snapshots/1/snapshot

All those personal data subvolumes are now under home. I made @var-arch-budgie/log and @var-arch-budgie/cache, copied the data over from the “old” directories to the “new” ones, and mounted them where they belong. After a reboot, everything was fine. And then I ran the commands above and the snapshot was made.

I installed grub-btrfs, and after regenerating grub.cfg and a reboot, the snapshot was there as a boot option. Booted into it. Everything was fine but the wallpaper was somehow changed.

Now I am a bit uncertain about how I can otherwise access the snapshot. It is owned by root and cannot be accessed via the file manager. Can it be mounted somewhere in the filesystem?

edit: sorry @dalto, I replied to myself :blush:

I don’t think you want a subvolume for /var. If you do have a subvolume for /var make sure that you snapshot it in a synchronized fashion with /. Otherwise, pacman will be unhappy if you ever have to recover from a snapshot.

If you are using grub-btrfs, I think you probably want /var to be in the root subvolume.

A snapshot is a subvolume like any other. You can mount them the same way you can mount any btrfs subvolume: by name or by subvolid.
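
For example, using the subvolid from your list (363 is the snapshot itself), something along these lines should work:

sudo mkdir -p /mnt/snapshot1
sudo mount -o subvolid=363 UUID=42f63a16-9a8b-421a-b509-7a3987aaa661 /mnt/snapshot1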

This is how they get mounted:

UUID=42f63a16-9a8b-421a-b509-7a3987aaa661           /var/cache                         btrfs              rw,noatime,compress=lzo,ssd,discard=async,space_cache,subvol=@var-arch-budgie/cache 0 0
UUID=42f63a16-9a8b-421a-b509-7a3987aaa661           /var/log                           btrfs              rw,noatime,compress=lzo,ssd,discard=async,space_cache,subvol=@var-arch-budgie/log 0 0

The rest is still at /var in the root subvolume. Hope this is alright. I don’t want, under any circumstances, to upset pacman :sweat_smile:

The whole of it? Alright, I understand. Otherwise some parts will be missing?
And if I do away with grub-btrfs, how do I go about restoring the system? With send and receive?

Good to know! I’ll try it to see if I can make it work.

If var is in the root, what is this:

You can just take a snapshot of the snapshot, because snapshots are subvolumes too in btrfs. Alternatively, you could rsync the data out of the snapshot into the target subvolume.
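
A rough sketch of the rsync variant, assuming you are booted into something else and have both the snapshot and the target root subvolume mounted (paths are illustrative):

sudo rsync -aAXH --delete /mnt/snapshot1/ /mnt/@arch-budgie-root/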

If it is at all helpful, here is what the subvols on my laptop look like right now.

subvolumes
ID 256 gen 275650 top level 5 path eos-root
ID 257 gen 275649 top level 5 path eos-home
ID 258 gen 212634 top level 5 path eos-varcache
ID 259 gen 275651 top level 5 path eos-varlog
ID 260 gen 212629 top level 5 path eos-paccache
ID 262 gen 275676 top level 5 path shared-snapper
ID 263 gen 276211 top level 5 path shared-localbin
ID 275 gen 275663 top level 5 path shared-appimage
ID 276 gen 275669 top level 5 path shared-dotsteam
ID 277 gen 275677 top level 5 path shared-steam
ID 278 gen 277866 top level 5 path shared-dotmozilla
ID 279 gen 277866 top level 5 path shared-dotthunderbird
ID 280 gen 275667 top level 5 path shared-dotminecraft
ID 282 gen 275665 top level 5 path shared-documents
ID 283 gen 275671 top level 5 path shared-downloads
ID 284 gen 275675 top level 5 path shared-pictures
ID 285 gen 275678 top level 5 path shared-videos
ID 286 gen 275674 top level 5 path shared-music
ID 316 gen 276216 top level 5 path shared-atjoplin
ID 317 gen 276187 top level 5 path shared-joplin
ID 318 gen 277866 top level 5 path shared-vivaldi
ID 325 gen 276531 top level 5 path fedora-root
ID 326 gen 277866 top level 5 path fedora-home
ID 327 gen 276531 top level 5 path fedora-varcache
ID 328 gen 276529 top level 5 path fedora-varlog
ID 336 gen 275658 top level 5 path mga-root
ID 337 gen 275657 top level 5 path mga-home
ID 338 gen 225754 top level 5 path mga-varcache
ID 339 gen 275659 top level 5 path mga-varlog
ID 361 gen 275661 top level 5 path neon-root
ID 362 gen 275660 top level 5 path neon-home
ID 363 gen 225942 top level 5 path neon-varcache
ID 364 gen 275662 top level 5 path neon-varlog
ID 1657 gen 275647 top level 5 path debian-root
ID 1687 gen 275646 top level 5 path debian-home
ID 1688 gen 200459 top level 5 path debian-varcache
ID 1689 gen 275648 top level 5 path debian-varlog
ID 2998 gen 275656 top level 5 path kaos-root
ID 2999 gen 275655 top level 5 path kaos-home
ID 3000 gen 191744 top level 5 path kaos-varcache
ID 3001 gen 191831 top level 5 path kaos-varlog
ID 4703 gen 277862 top level 5 path pop-root
ID 4738 gen 277866 top level 5 path pop-home
ID 4739 gen 277206 top level 5 path pop-varcache
ID 4740 gen 277866 top level 5 path pop-varlog

Right, I guess it was an unnecessary step to create this first and then make two others (cache and log) inside of it. I realize that it is a bit confusing.

I understand the second approach, but I need to imagine/visualize better how the first approach works.

It is indeed! It looks very neat, impressive! We could perhaps talk about it more in the other thread I have pending.

I thank you once again for patiently walking me through this. I would do away with grub-btrfs if I feel confident that I can restore the system by taking snapshots of the snapshots. But I admit I am a bit outside my comfort zone.

Could we continue this tomorrow if you don’t mind? It would be good for me to let it rest and see it with fresh eyes. And a clearer head.

Of course.

Thanks! See you then!
Have a pleasant evening!
:wave:t5:

I thought it might help if I shared an outline of the procedure to recover using snapshots. There are actually lots of ways to do this but here is how I would probably approach it. To do this, you will either need to be booted into a different subvolume/snapshot or off an ISO.

I will assume the root of the btrfs device is mounted at /mnt. However, it could be anywhere.
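
For example, from a live ISO that could be something like:

sudo mount -o subvolid=5 /dev/nvme0n1p6 /mnt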

First, rename the old root subvolume and put an r/w snapshot on top of it.

sudo mv /mnt/@arch-budgie-root /mnt/@arch-budgie-oldroot
sudo btrfs subvolume snapshot /mnt/@arch-budgie-oldroot/.snapshots/1/snapshot /mnt/@arch-budgie-root

If you are booted into a snapshot, then you would reboot at this point. If not, you can do the following and reboot after.

sudo mv /mnt/@arch-budgie-oldroot/.snapshots /mnt/@arch-budgie-root/.

The above command moves your “.snapshots” subvolume out of the oldroot into the new one. Lastly, you can optionally remove the old subvolume if you don’t need it anymore:

sudo btrfs subvolume delete /mnt/@arch-budgie-oldroot

That is really all there is to it. You might consider trying it out in a VM first so you are comfortable with it.

Awesome! Thank you so much! It is all getting clearer to me, bit by bit, how snapshots and subvolumes work.

I am sorry if I haven’t been doing much of the homework myself. I’ll make time to read some of the documentation on BTRFS, subvolumes, etc., to at least be able to ask relevant questions.

I would very much like to step out of the comfort zone of a casual GNU/Linux user and learn more, and in more depth. As it stands now, my knowledge about things Linux could at best be described as Swiss cheese. Lots of holes in it.

At this stage, I am starting to think that I might be better off in the long run if I redo the whole system and switch to systemd-boot in the process. Below is the partition scheme of one of the laptops around the house. Please let me know your thoughts about this.

lsblk -fs
NAME      FSTYPE FSVER LABEL       UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
zram0                                                                                  [SWAP]
nvme0n1p1 vfat   FAT32 ENOS-ESP    BC20-92DB                              97.9M     1% /boot/efi
└─nvme0n1                                                                              
nvme0n1p2 btrfs        EnOS-gnome  0d19c35e-8b2b-4844-a2e6-359ae17575e1   23.7G    51% /home
│                                                                                      /
└─nvme0n1                                                                              
nvme0n1p3 vfat   FAT32 LM-ESP      C82F-3A29                                           
└─nvme0n1                                                                              
nvme0n1p4 btrfs        LM-cinnamon bd837b5b-cb17-4ef8-9d6f-af66dc71c15a                
└─nvme0n1                                                                              
nvme0n1p5 vfat   FAT32             F2B2-12BB                                           
└─nvme0n1                                                                              
nvme0n1p6 btrfs                    6e488b94-abf5-49df-ab0c-6d24c004998f                
└─nvme0n1                                                                              
nvme0n1p7 btrfs        Data        27c1dbde-b305-416b-97c2-125aefb12f1d    512G    35% /Data/Audios
│                                                                                      /Data/Backups
│                                                                                      /Data/ISOs
│                                                                                      /Data/Pictures
│                                                                                      /Data/Downloads
│                                                                                      /Data/VirtualMachines
│                                                                                      /Data/Music
│                                                                                      /Data/Videos
│                                                                                      /var/cache/pacman/pkg
│                                                                                      /Data/Library
│                                                                                      /Data/Documents
└─nvme0n1                                                                              
nvme0n1p8 swap   1                 ca1e4355-91d3-4c8d-9b43-fd5ce32417ed

Ahh…why do you have separate ESPs?

I think it is always better to switch to systemd-boot, but I may not be the most independent party there. :rofl:
