I can't chown the files. To all intents and purposes they don't exist; they're directory remnants of sorts. Many years ago you used to be able to do a form of direct edit on a directory, but I don't want to brick my BTRFS after the memory debacle, and I'm fairly sure that's probably not what needs to happen here.
They're not doing any harm; it would just be nice to figure out how to fix it.
First, I am assuming you have already tried rebooting?
Next, run btrfs check on the volume and report back if it finds any errors. Don't pass it the --repair option unless you are 100% sure the errors it finds can be repaired. --repair can render a filesystem completely unreadable if it is run in the wrong state.
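For reference, a safe read-only check from live media looks something like this. The device path here is just an example; substitute your own Btrfs partition, and make sure it is not mounted when you run the check:

```shell
# From a live ISO, with the target filesystem NOT mounted.
# --readonly is the default mode and forces check to make no changes.
# /dev/nvme0n1p7 is an example device path; adjust for your system.
sudo btrfs check --readonly /dev/nvme0n1p7
```

Running it unmounted matters: checking a mounted, in-use filesystem can report spurious errors.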
Booted into the EOS ISO on a USB stick and ran a check on the NVMe. I got:
Opening filesystem to check...
Checking filesystem on /dev/nvme0n1p7
UUID: 618bc5b2-d681-4772-bf2a-2f828db3b5fc
found 206671380480 bytes used, error(s) found
total csum bytes: 199572836
total tree bytes: 1079312384
total fs tree bytes: 781025280
total extent tree bytes: 57802752
btree space waste bytes: 195591760
file data blocks allocated: 417718456320
referenced 232369872896
root 577 inode 224562 errors 1, no inode item
unresolved ref dir 261 index 235 namelen 20 name tuxclocker.conf.lock filetype 1 errors 5, no dir item, no inode ref
root 577 inode 224563 errors 1, no inode item
unresolved ref dir 261 index 237 namelen 15 name tuxclocker.conf filetype 1 errors 5, no dir item, no inode ref
root 577 inode 224564 errors 1, no inode item
All of the errors were related to the problematic tuxclocker files.
If it were my system, I would make a good backup of any important data and then run btrfs check --repair.
That being said, I don't know enough about btrfs internals to tell you definitively that this is the right path, and --repair definitely has the potential to destroy data.
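If you do go that route, the backup-then-repair sequence might look roughly like this, again from the live environment. The mount points and backup target are assumptions; adjust them for your setup:

```shell
# ASSUMED paths: /mnt as a temporary mount point, /mnt/backup as the
# backup destination, /dev/nvme0n1p7 as the affected Btrfs partition.

# 1. Mount read-only and back up anything important first.
sudo mount -o ro /dev/nvme0n1p7 /mnt
sudo rsync -aHAX /mnt/ /mnt/backup/
sudo umount /mnt

# 2. Only then, with the filesystem unmounted, attempt the repair.
#    This can make things worse in the wrong state -- backup first.
sudo btrfs check --repair /dev/nvme0n1p7
```

The read-only mount in step 1 ensures the backup pass itself cannot disturb the damaged metadata before the repair attempt.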