Searching for a real Backup Solution (no BTRFS or Timeshift!)

Hello everyone,

for the last 4 days I have been busy trying to finally find a backup solution that works. I’m doing this because I’m on vacation at the moment and have been putting it off for years.

Actually(!) I would like to have something like Acronis True Image or Macrium Reflect (Windows only). A service that creates incremental backups every hour and saves them to a network drive.

And if the hard drive breaks, or I catch a virus, or I accidentally delete various folders while drunk, I can simply restore the previous state with 2 clicks. Or if the hard disk is really defective, I replace it, boot from the respective live CD, and restore the whole image to the new disk.

Now I know, because I last dealt with this 2 years ago, that such software simply does not exist for Linux. At that time I gave up, and as a “solution” simply installed Proxmox as the host, created a VM with all the hardware I need (including various PCI devices passed through) and installed my Arch on it. Then I configured Proxmox to take a snapshot every hour and wrote a script (more precisely, a pacman hook) that connects to Proxmox via SSH and creates a snapshot with “Pacman Update $Date” as the description, so I can go back if any update messes up my system.
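For anyone curious, such a hook might look roughly like this (a sketch only: the hook path, script path, Proxmox host name, and VM ID are placeholders I made up, not taken from the post):

```ini
# /etc/pacman.d/hooks/proxmox-snapshot.hook  (path and names are placeholders)
[Trigger]
Operation = Upgrade
Type = Package
Target = *

[Action]
Description = Taking a Proxmox snapshot before the update
When = PreTransaction
Exec = /usr/local/bin/proxmox-snapshot.sh
```

where the script would do something like `ssh root@proxmox qm snapshot 100 "pacman-$(date +%F-%H%M)" --description "Pacman Update"` (again, VM ID 100 and the host name are invented for the example).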

That worked well enough. Unfortunately the performance suffered a bit, and sometimes, when Proxmox started a snapshot, my “real” system froze for a short time. That didn’t happen often, but it still annoyed me.

Since then I back up my hard drive once a month with Clonezilla.

But by now this also gets on my nerves, because (not only out of curiosity and fun but also because of my job) I very often experiment with my system, and sometimes when I’m done with a “session” it’s easier to restore a 1:1 image than to uninstall 42389472 packages and hunt down their leftovers. But then I have to boot Clonezilla again every time and pull a fresh image, and during that time nothing can be done with the machine.

So, for these reasons, I’m looking for a solution right now. Since, as mentioned, programs like Macrium or Acronis don’t exist for Linux, it’s clear to me that I have to take a two-part approach.

My idea is:

  1. I install my system as usual with the usual partition scheme, but minimal. I create my mountpoints in fstab, install backup software X (which still has to be found), configure it, and install a browser. Nothing more. Then I pull an image of this state with Clonezilla.

  2. After point 1 is done, I just keep working with my EndeavourOS, and backup software X does my hourly backups/snapshots.

  3. ?

  4. Profit!

And with that solution I can then restore any snapshot/backup after I’ve experimented around. And if for some reason the hard drive breaks, I can just boot Clonezilla, restore the image, and start the OS where backup software X is already present and configured, then easily restore a backup/snapshot. And after a reboot, voilà, everything is as if nothing had ever happened.

Well, Timeshift (rsync) or BTRFS (or Timeshift with BTRFS) can do exactly that. BUT(!!) they don’t allow the snapshots to be stored anywhere other than on the machine itself. This is total nonsense, because if the hard disk suddenly gives up, the backups/snapshots are gone too. So BTRFS as a solution is already out of the question. But then Timeshift can also work with rsync.

Unfortunately this doesn’t work either, because Timeshift stores the backups/snapshots only locally on the machine (path: /timeshift). Well, I thought I was being clever: quit Timeshift incl. all services, delete /timeshift, and create an NFS mountpoint under /timeshift to trick Timeshift. Unfortunately, Timeshift recognizes this and refuses to back up. A short internet search showed that many people complain about this, but the developer doesn’t want to change it, because it would supposedly bring unexpected complications with hard links, and NFS would not support them (which is not true), etc. (source as an example: https://github.com/teejee2008/timeshift/issues/52). So Timeshift is unfortunately also out.

Then, the day before yesterday, I found the oh-so-promising hero: BackInTime! Installed it, configured it, and made a backup. Great. Everything as desired; since then various snapshots have been created, which are now all sitting nicely on my 3x redundant server. Again created pacman hooks etc., and everything worked. Until yesterday, when I had the idea to test a restore.

So I used Bash to create a few hundred empty test folders and test files inside a folder. I deleted my Downloads folder (there is always a lot of junk in there that can go), installed Firefox, VLC and a few other small programs with pacman, and as the icing on the cake I broke my pacman.conf so badly that no update is possible anymore.

After that I started BackInTime, full of anticipation, and restored the last snapshot. Once that was done, I restarted, and was disappointed:

  1. Good: The deleted files from downloads are back.
  2. Bad: My test folder with all the empty test folders and files is also still there.
  3. Even worse: The installed test programs are also there.
  4. Worst: The broken pacman.conf is also still there.

God, I cursed… But then I found that there is an option to prevent exactly that. BUT(!) that doesn’t work either, because it explicitly says that folders which were excluded during backup/snapshot creation will be deleted during the restore. And that means, for example, everything that is mounted under /mnt… and with it my whole server with 80TB of data!

So… BackInTime is unfortunately also out.

Then there is duplicity/Déjà Dup. Exactly the same problem/behavior as BackInTime. Also disqualified.

And now I’m really desperate and at a loss. I just don’t know how to do the following:

  1. Create hourly incremental backups/snapshots and back them up externally.
  2. When restoring, also restore exactly the state of the respective backup/snapshot 1:1.

I am really sorry, you poor people, that you had to read all this. But I hope I can be helped.

This text was translated with DeepL from German to English, but reading through it, everything seems right :slight_smile:

Thanks a lot!

Use Vorta, which is a front-end for borg.

2 Likes

Wow, that was fast. How fast can you read? :slight_smile:

And borg does exactly what I want? If yes, I’ll test it immediately. Thanks.

borg does real deduplicated backups and optionally supports compression and encryption.

As far as your desired recovery strategy goes, it should work, depending on how you restore it.

For example you could mount the borg backup and then use rsync to mirror the source to the target which would make it look exactly like the source.

1 Like

So, if I understand right, Vorta/borg doesn’t have a restore option?

I installed borg/Vorta just now. Didn’t find an option for that. And if I see it right, I need server software for that? Oh crap. OK, then I’ll look on YouTube first to see if it does exactly what I want.

You can restore files directly from Vorta but if you do I suspect you will have the same problem where it restores those files but doesn’t remove the other ones.

That is why, instead, I would mount the borg snapshot and then use rsync to make them the same. Which would copy the old data and remove anything you have added.

I have no idea what you are referring to here.

1 Like

So the same behaviour as BackInTime? Because BackInTime is only an rsync frontend which makes it easier to create these kinds of backups/snapshots, as I found out.

And if I use plain rsync with the --delete option, that would delete everything on the target that isn’t in the source. So it would again delete /mnt and everything underneath (and with it my whole server)…

nvm, I misunderstood something. I thought I needed a git server to work with borg because Vorta asked me for a git URL. Watching a video right now.

One easy solution would be to unmount your server before restoring a snapshot…

However, if not, there should be options for rsync that let you exclude things.

Yes, I know that. But there’s no option to skip deletion of folders/files that don’t exist in the source… (I found none)

Yeah. But sadly no program has an option for that, for example. So I could have a half-and-half solution: for backup I use BackInTime, and for restore a script that unmounts everything under /mnt and restores a snapshot… Not very convenient.

I’m curious how Timeshift (rsync) does this, because Timeshift (rsync) does exactly what btrfs does. Sadly, my programming knowledge is very bad and it isn’t possible for me to understand the source code correctly.

Edit:

Vorta/borg is making its first backup now. 100GB of data (deduplicated to 30GB, lol) already done, out of ~1TB. Without compression or anything.

Edit2:

Interesting size… 155.6TB… nice to know that the size needs to be recalculated after exclusions are set.

So…

I have now dealt extensively with Borg.

Unfortunately it’s also unsuitable for my use case.

And as feared, because of the restore. Even if I unmount my mounts before the restore, rsync deletes the mount points under /mnt. Of course they are empty now, because they’re unmounted, so that’s only half bad. But the folders themselves are deleted, so after the restore most of my mounts from fstab can’t be mounted, because the mountpoints are missing…

The solution would be to unmount the mounts already when creating the backup, but then I can’t create a backup at all, because the place where the backup should go isn’t reachable…

Then I found restic. Unfortunately the same problem.

I like borg a lot because of the deduplication.

I’ll have to see if I can find an IRC channel or something with rsync experts. I can’t imagine for the life of me that rsync doesn’t have a “delete everything that doesn’t exist in the source except folder X” option. Haven’t found one so far.

Translated with www.DeepL.com/Translator (free version)

Edit:

The Gentoo IRC chat gave me a hint… instead of using one rsync command, use multiple ones, one per dir. lol, didn’t think of that.

But now I found out that borg didn’t save symlinks… that is really bad.

[root@galaxias galaxias-2021-11-23-005048]# rsync -avh bin/ /bin --delete --dry-run
sending incremental file list
deleting mount.nfs
deleting cupsd
deleting augenrules

sent 61.49K bytes received 52 bytes 123.08K bytes/sec
total size is 895.61M speedup is 14,553.50 (DRY RUN)
[root@galaxias galaxias-2021-11-23-005048]#

It wants to delete mount.nfs on the destination (it’s a symlink to mount.nfs4) because it isn’t in the backup. I tested it on other dirs too; it would destroy half the system. Now I have to find out how to save symlinks with borg too.

I finally have a solution. It’s not as nice as with BTRFS or Timeshift, but it works.

Had an intense conversation with the nice people from the #Gentoo and #rsync IRC channels.

I will now be able to take a snapshot to my NFS share every hour with borg and restore it with rsync.
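The hourly borg run could be driven by a systemd timer pair like this (a sketch only: the unit names, repository path, and exclude list are placeholders I made up, not from this thread):

```ini
# /etc/systemd/system/borg-backup.service  (names and paths are placeholders)
[Unit]
Description=Hourly borg snapshot to the NFS share

[Service]
Type=oneshot
ExecStart=/usr/bin/borg create --exclude /mnt --exclude /proc --exclude /sys \
    /path/to/nfs/borgrepo::{hostname}-{now} /

# /etc/systemd/system/borg-backup.timer
[Unit]
Description=Run borg-backup.service every hour

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

The `{hostname}` and `{now}` placeholders are expanded by borg itself, so each archive gets a unique, sortable name.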

For this I will have to write a script that I can call, which then gives me a listing of all available borg snapshots; I simply select one and it is automatically mounted and restored via rsync, just like BTRFS/Timeshift does it. The guys from #rsync told me that restoring a running system is not such a good idea, because if e.g. glibc differs and is overwritten, it could very well be that the whole system just freezes. They have a point there.

I will now simply install an absolutely minimal Debian as dual boot. In it, the / of my EndeavourOS and my NFS backup folder are mounted. When Debian starts, it automatically logs in as root and runs my script, where I can then select which snapshot should be restored.

I think this is the simplest way to finally implement my project…

But first I will evaluate how big the problem really is when you restore the system live, on the fly.

More info, and sourcecode for the restore script will come as soon as I’m done. But now I have to go to sleep :smiley:

2 Likes

The correct solution for this would be to use a virtual machine for testing. Vagrant would be worth looking into for quickly creating/destroying/recreating VMs with a specific/programmatic configuration.

1 Like

I guess sometimes people just like to destroy their storage for no real reason.

1 Like

I know. But I can’t do that, because I need every % of performance I can get, and I need a system which is used daily and has hundreds of different packages, to see if something conflicts or breaks.

But anyway, I have my solution and this thread can be closed. I have opened a new one for my solution:

https://forum.endeavouros.com/t/borgrestore

For testing? KVM is near-metal performance.

Virtual machine. :stuck_out_tongue_winking_eye: (or a chroot, or container, or some sort of CI, depending on what you’re doing - either way wiping your live installation every day is very likely to be an inefficient way of doing things)

Oh, wait, snapshots would also do this - snapshot before testing, make changes, restore the snapshot.

Please mark the post that suggested borg as the solution, rather than your reply that you found a solution. It’s nice to credit the person who helped you solve your problem.

1 Like

Done. But for my solution I could have used BackInTime too :stuck_out_tongue_winking_eye: but I like borg. It’s a nice piece of software.

Yeah, as in my original post, I have already done that. But I didn’t like it.

But nvm. Now I have the same as with BTRFS or Timeshift, but without them, and with the possibility to use NFS shares…

You should read my first post in this thread.

I did, and you’re combining backups with snapshots - these serve different purposes. On the one hand, you want a remote copy of your filesystem. On the other, you want to be able to quickly restore to a previous state after testing.

Unless you are completely destroying your local disk every time you test things then snapshots are probably all you need.

However, if you have something that meets your specific use-case then that’s good, and it’s always nice to see new approaches to problems.

Yes. But I don’t want/need BTRFS. And Timeshift would be perfect (uses btrfs or rsync, whichever you prefer). But Timeshift doesn’t let you specify an NFS share as the destination, and it doesn’t make sense why.

Sure, I want snapshots. But there is no reason why they couldn’t be on an NFS share. Sure, you could copy them manually to an NFS share. But if you want to restore after a disk fails, you first need to copy the snapshots from the NFS share to the machine (which first needs to be installed…).

Now I have it like on Windows or Apple. Want to go back a few hours? No problem. Want to set up a new machine and use the snapshots to copy that system? No problem. Your disk melted and you must replace it? No problem.

Just like I wanted it :slight_smile:

And maybe, next time I have vacation, I can try to write a better/nicer-looking piece of software. Who knows.

1 Like

This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.