The “Before restoring” snapshot that Timeshift creates before restoring to another snapshot cannot be deleted. All the other snapshots delete fine; only this one won’t go away. Can someone help?
What output do you get in the terminal if you try to delete it from the command line?
sudo btrfs subvolume delete /path/to/snapshot
ERROR: Could not statfs: No such file or directory
That seems rather unambiguous. Perhaps Timeshift has some bad cached data or something along those lines.
One of the strange things is that grub-btrfs doesn’t see this undeletable snapshot either.
Could this situation create a problem in the future?
It looks like it is deleted.
To double-check, list out all subvolumes and verify it isn’t in there.
sudo btrfs subvolume list /
Use grep to narrow it down if there is a lot of output.
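For instance, you can pipe the listing through grep for the snapshot’s timestamp. The block below runs the filter over a captured copy of the output shown later in this thread (the timestamp is the one from this thread; substitute your own); in practice you would pipe the live `sudo btrfs subvolume list /` straight into grep.

```shell
# Hypothetical captured output from `sudo btrfs subvolume list /`
# (a few lines taken from this thread):
list_output='ID 256 gen 1151 top level 5 path timeshift-btrfs/snapshots/2024-08-31_18-45-14/@
ID 273 gen 1403 top level 5 path @
ID 274 gen 892 top level 5 path timeshift-btrfs/snapshots/2024-08-31_20-23-18/@'

# Keep only lines mentioning the snapshot in question:
printf '%s\n' "$list_output" | grep '2024-08-31_18-45-14'
# → ID 256 gen 1151 top level 5 path timeshift-btrfs/snapshots/2024-08-31_18-45-14/@
```

If grep prints nothing, the subvolume really is gone and only Timeshift’s bookkeeping is stale.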
If you had an old snapshot you were not able to delete, it would tie up disk space unnecessarily. Other than that, unless you need that specific snapshot for some reason it seems unlikely to cause an issue.
ID 256 gen 1151 top level 5 path timeshift-btrfs/snapshots/2024-08-31_18-45-14/@
ID 258 gen 1402 top level 5 path @cache
ID 259 gen 1403 top level 5 path @log
ID 260 gen 23 top level 256 path timeshift-btrfs/snapshots/2024-08-31_18-45-14/@/var/lib/portables
ID 261 gen 23 top level 256 path timeshift-btrfs/snapshots/2024-08-31_18-45-14/@/var/lib/machines
ID 272 gen 1403 top level 5 path @home
ID 273 gen 1403 top level 5 path @
ID 274 gen 892 top level 5 path timeshift-btrfs/snapshots/2024-08-31_20-23-18/@
ID 275 gen 293 top level 5 path timeshift-btrfs/snapshots/2024-08-31_20-23-18/@home
ID 278 gen 892 top level 5 path timeshift-btrfs/snapshots/2024-08-31_20-45-10/@
ID 279 gen 336 top level 5 path timeshift-btrfs/snapshots/2024-08-31_20-45-10/@home
ID 296 gen 892 top level 5 path timeshift-btrfs/snapshots/2024-09-01_18-44-56/@
ID 297 gen 893 top level 5 path timeshift-btrfs/snapshots/2024-09-01_18-44-56/@home
ID 298 gen 900 top level 5 path timeshift-btrfs/snapshots/2024-09-01_18-48-17/@
ID 299 gen 900 top level 5 path timeshift-btrfs/snapshots/2024-09-01_18-48-17/@home
ID 300 gen 927 top level 5 path timeshift-btrfs/snapshots/2024-09-01_20-22-29/@
ID 301 gen 928 top level 5 path timeshift-btrfs/snapshots/2024-09-01_20-22-29/@home
ID 306 gen 979 top level 5 path timeshift-btrfs/snapshots/2024-09-01_21-01-28/@
ID 307 gen 981 top level 5 path timeshift-btrfs/snapshots/2024-09-01_21-01-28/@home
ID 308 gen 1130 top level 5 path timeshift-btrfs/snapshots/2024-09-02_03-15-18/@
ID 309 gen 1131 top level 5 path timeshift-btrfs/snapshots/2024-09-02_03-15-18/@home
ID 310 gen 1342 top level 5 path timeshift-btrfs/snapshots/2024-09-02_05-00-09/@
ID 311 gen 1343 top level 5 path timeshift-btrfs/snapshots/2024-09-02_05-00-09/@home
ID 312 gen 1400 top level 5 path timeshift-btrfs/snapshots/2024-09-02_14-46-27/@
ID 313 gen 1401 top level 5 path timeshift-btrfs/snapshots/2024-09-02_14-46-27/@home
No, it’s still there. What are the /var/lib/portables and /var/lib/machines subvolumes?
There’s a bug in Timeshift where, if btrfs quotas are enabled, not all the files in the snapshot folder get deleted in one go. For example, the .json metadata file may remain even though the actual snapshot was deleted. Timeshift will still list this snapshot, but it won’t actually be able to restore it because the snapshot is gone. On the next delete request or automated cleanup, the stale entry will be removed.
This actually isn’t a problem as long as you know not to trust Timeshift’s snapshot list.
The usual commands to list, remove or create snapshots will work as expected.
To get rid of this Timeshift error you could simply deactivate btrfs quotas with
sudo btrfs quota disable /
The downside to this: Timeshift will not be able to calculate and display a snapshot’s size.
sudo btrfs quota disable /
I entered the command and I still get the same error.
As far as I know, Timeshift shouldn’t report any errors anymore after disabling qgroups.
Check whether you have any stale qgroup entries (this command will error out if qgroups are disabled).
sudo btrfs qgroup show / | grep stale
Try creating and deleting a new snapshot. You may still get an error when deleting old entries.
In any case, the Timeshift error isn’t anything to worry about. Timeshift tries to delete the qgroup entry, fails, and then doesn’t clean up correctly, leaving its own .json file intact. It then assumes the snapshot is still there and errors again on the next delete attempt. The entry will be removed after multiple remove attempts.
Those are related to systemd containers and portable service images. It is normal to have these and it is fine if they are empty.
Mount the top level of the Btrfs partition somewhere so you can access all of the subvolumes directly, outside the ones that are normally mounted.
sudo mount /dev/[btrfs partition] /mnt
Then delete the subvolume, using that mount point as the base of the path.
sudo btrfs subvolume delete /mnt/timeshift-btrfs/snapshots/[whatever snapshot it is]
The output from that may reveal what the issue is.
I tried these:
sudo mount /dev/sda2 /mnt
sudo btrfs subvolume delete /mnt/timeshift-btrfs/snapshots/2024-08-31_18-45-14/@
output:
Delete subvolume 256 (no-commit): '/mnt/timeshift-btrfs/snapshots/2024-08-31_18-45-14/@'
ERROR: Could not destroy subvolume/snapshot: Directory not empty
and the output of this command:
sudo btrfs qgroup show / | grep stale
output:
ERROR: can't list qgroups: quotas not enabled
The problem still persists…
There may be a nested subvolume preventing it from being deleted; you’ll have to find it and delete that one first.
sudo btrfs subvolume list -o /mnt/timeshift-btrfs/snapshots/2024-08-31_18-45-14/@
Use btrfs subvolume delete on whatever subvolumes that command outputs.
sudo btrfs subvolume delete /mnt/timeshift-btrfs/snapshots/2024-08-31_18-45-14/@/[whatever the nested subvolume is]
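If there are several nested subvolumes, you can build the delete paths from the listing instead of typing them by hand. Note that `btrfs subvolume list -o` prints paths relative to the filesystem’s top level, so the mount point has to be prepended. The block below demonstrates the path extraction on a sample line (taken from the listing earlier in this thread); the final piped-together command against the live filesystem is an untested sketch.

```shell
# Sample line in the format `btrfs subvolume list -o` produces:
sample='ID 260 gen 23 top level 256 path timeshift-btrfs/snapshots/2024-08-31_18-45-14/@/var/lib/portables'

# The path is the last whitespace-separated field; prepend the mount point:
printf '%s\n' "$sample" | awk '{print "/mnt/" $NF}'
# → /mnt/timeshift-btrfs/snapshots/2024-08-31_18-45-14/@/var/lib/portables

# Against the live filesystem this would look like (untested sketch):
#   sudo btrfs subvolume list -o /mnt/timeshift-btrfs/snapshots/2024-08-31_18-45-14/@ \
#     | awk '{print "/mnt/" $NF}' \
#     | xargs -r -n1 sudo btrfs subvolume delete
```

Once the nested subvolumes are gone, deleting the snapshot’s @ subvolume itself should succeed.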
Yes, this solved the problem. The command you gave me produced two outputs:
/mnt/timeshift-btrfs/snapshots/2024-08-31_18-45-14/@/var/lib/portables
and
/mnt/timeshift-btrfs/snapshots/2024-08-31_18-45-14/@/var/lib/machines
Then I deleted the subvolumes given in the output and also deleted the problematic snapshot. Thanks!