I guess it depends on stick quality in the first place.
I keep my sensitive data on a USB stick encrypted with LUKS and an ext4 filesystem, but I don’t trust that it will not fail. Especially since a few of my low-end Kingston and ADATA USB sticks broke in quick succession, just like that, without any particular reason.
The things I care about the most are encrypted with Cryptomator and synced between multiple devices with Syncthing.
yes, but here is where the problem starts: one drive needs to be the source after all to keep backups in sync - and i still have a somewhat controversial opinion on usb sticks acting as the main source. maybe due to bad (non-linux) experiences in the past. i had hoped to solve this problem with a filesystem that is more robust (and/or has extras like integrated checksums) than ext4 already is.
the reason for asking here: i was thinking of moving all sensitive stuff over to a stick. this way, the data gains much more flexibility and an arguably higher level of privacy (even without encryption) - well, that depends on where the device is kept.
perhaps, but i am not sure about it? this is basically about “today’s” usb stick reliability in general, not so much about specialized pricey hardware.
i also keep thinking of scenarios like power loss, or perhaps a forced shutdown. i wonder if usb sticks tend to be more vulnerable to these threats than hdds or ssds. but to some extent, this is probably also a question of the chosen fs?
Some USB sticks are the only drives that have broken on me ever. So far no HDD nor SSD has broken.
As some have already suggested, to be safer, use several different drives for storing backups.
HDDs and SSDs are very reliable nowadays. And USB sticks can be reliable, but their quality varies between manufacturers. That’s why the more sticks you use, the better.
And depending on how much data you want to back up, there are many alternatives for encryption.
One nice little encryption program is ccrypt; it encrypts files.
First you could collect several files and folders into one file with tar (and compress it too). Then you can easily encrypt the file with ccrypt.
And then you can copy that encrypted file to many USB sticks as backups.
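The tar-then-ccrypt workflow described above can be sketched like this; the folder and passphrase are made up for the demo, and the `ccrypt` step is guarded in case the tool isn’t installed yet:

```shell
# Toy data to back up (a stand-in for your real files and folders).
mkdir -p docs
echo "tax records" > docs/taxes.txt

# Step 1: collect and compress everything into one archive.
tar czf backup.tar.gz docs

# Step 2: encrypt the archive with ccrypt (creates backup.tar.gz.cpt
# and removes the plaintext archive). -K passes the passphrase on the
# command line, which is handy for a demo but insecure in real use --
# omit it and ccrypt will prompt for it instead.
if command -v ccrypt >/dev/null 2>&1; then
  ccrypt -e -K "demo passphrase" backup.tar.gz
fi
```

The resulting `.cpt` file is then just an ordinary file you can copy to as many sticks as you like; `ccrypt -d` reverses the encryption.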
For more details, run `man ccrypt` or `ccrypt --help` after installing ccrypt.
The great thing about command line tools is that they (most probably) don’t “age”. Five or ten years from now you’ll still have those same commands available, and they most probably will be backwards compatible too.
And whatever system for backups you choose, remember to periodically (at least once a year?) test that the backups can be restored.
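A minimal restore drill can be as simple as unpacking the archive into a separate directory and diffing it against the original — a toy example with a throwaway folder:

```shell
# Make a small sample tree and archive it, as in a real backup.
mkdir -p original restored
echo "important notes" > original/notes.txt
tar czf sample-backup.tar.gz original

# Restore into a separate directory and verify nothing differs.
tar xzf sample-backup.tar.gz -C restored
diff -r original restored/original && echo "restore OK"
```

For an encrypted backup, the drill would simply start with a decryption step before the `tar xzf`.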
this is what i do already, but just think about corruption on your master drive - in the best case you end up with a slightly older version of your data on a different drive. worst case, you synced corrupted data for an unspecified period of time without knowing. and all this just because of usb sticks as your choice of source?
as a general rule of thumb, i (still) have this correlation in my mind: usb sticks = do not use them for long-term storage.
That may be true. USB sticks are usually quite cheap, and “cheap” usually means their life span is not that long compared to alternatives.
But having said that, I do use USB sticks for some amount of backups (not for critical data though) simply because they are so easy to handle.
And speaking of data corruption on the master drive: it is a good idea to use a filesystem that has a good and long history of reliability (I’m talking about ext4 which AFAIK is very reliable). Rushing into the latest available super-duper filesystem may cause tears later…
Anything of any importance at all is on my EndeavourOS LAN file server, NAS, or whatever you want to call it. My home directory is just about bare. I use rsync to back up the main NAS SSD to a secondary USB backup SSD in an enclosure. The secondary SSD is only hooked up to the NAS during an actual backup session. That way the backup SSD is not normally powered up; it is basically in storage. About twice a year I swap the main SSD with the backup SSD to spread the usage between them.
Here comes the on-topic part. Here is my latest scheme for off-site storage. I got a name-brand M.2 2280 SSD (not NVMe), then bought an M.2-to-USB external enclosure. I’m not necessarily pushing this particular brand, but something like it.
I use rsync to make a copy of my NAS SSD and then take this to my daughter’s house and she stores it in her safe. Or one could put it in a safety deposit box in your bank. Now in case of an extreme disaster such as the house burning down, I still have a relatively recent copy of my data. I use rsync because when I later update the off site SSD it only has to deal with what has changed.
My daughter is about an hour’s drive away. When I get some extra discretionary cash, I am seriously thinking about buying another one of these setups. Then when I am going to her house I could make a current backup, swap them out, and bring the other one back for next time. Right now, I go to her house for a visit, bring home the SSD, then make a backup just before the next visit and take that to her. In the meantime, my off-site storage is no longer off site until I return it. Sort of defeats the purpose.
As to reliability, IMHO M.2 SSD are as reliable as anything available on the consumer market today. If you want better then maybe Enterprise SSDs would be appropriate, but bring your wallet!
This setup is larger than a USB thumb drive, but smaller than a 2.5 inch SSD or Hard Drive in an external enclosure.
Did you consider a 2.5 inch USB drive? HDD or SSD, depending on whether it will travel a lot or not.
I use two 1 TB external drives attached to my HP off-lease SFF PC (acting as a server - really cheap, under $100) and one 128 GB SSD.
They all run on a single filesystem: ZFS. The HDDs are in a mirror (RAID1 equivalent) and the SSD is partitioned to act as both log and cache devices for ZFS. This is cheap but very reliable. My near-future plan includes putting a tiny PC in my sister’s home with another 1 TB USB HDD in order to have a geographically separated backup. Then only a nuclear bomb or an EMP will bust my data.
ZFS snapshots are managed with Jim Salter’s well-known sanoid tool, so I have snapshot retention and persistence across many days.
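For reference, sanoid’s retention policy lives in `/etc/sanoid/sanoid.conf`; a minimal sketch looks roughly like this (the dataset name `tank/backups` is made up — substitute your own pool/dataset):

```ini
# /etc/sanoid/sanoid.conf -- hypothetical dataset name
[tank/backups]
        use_template = production

[template_production]
        hourly = 24
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

With that in place, a periodic `sanoid --cron` run (e.g. from a systemd timer) takes the snapshots and prunes old ones automatically.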
The servers are on either side, the middle top is the power bricks, and the middle bottom is the USB SSD enclosures for backup. The LAN server also has samba and minidlna services installed.
The last time I looked into ZFS was a long time ago. Couple ZFS with sanoid and I just might have to rethink this. Thanks for the information! After you install the computer at your sister’s house, start a new thread and let me know how it worked out.
i knew this was the right place to raise a question like this. you really have some fine setups and ideas
@pudge: swapping drives to balance usage is something i never really thought of. good point.
@patryk: i have 2x 2.5″ external devices (not permanently attached) of the same size, one being a hdd. grsync is my weapon of choice for syncing. under linux & ext4 i never experienced any severe situations (also with usb drives!). i tend to keep things as simple as possible: use open, mature tools and stick with standards, also in regards to kernel support.
@manuel: that is also the reason for not juggling with other filesystems, compression or encryption so far. that of course does not mean i am not fascinated by newer filesystems offering a lot more features out of the box.
i would love to see lz4 compression on ext4 and some sort of permanent checksum verification like btrfs has?
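A crude stand-in for btrfs-style checksumming on ext4 is a checksum manifest you regenerate after each sync and re-verify before trusting a backup — a sketch with a throwaway folder:

```shell
# Toy data directory (stand-in for the stick's contents).
mkdir -p data
echo "precious bytes" > data/doc.txt

# Record a SHA-256 checksum for every file under data/.
find data -type f -print0 | xargs -0 sha256sum > manifest.sha256

# Later (e.g. before syncing elsewhere), verify nothing silently
# changed; -c re-reads each file and compares against the manifest.
sha256sum -c --quiet manifest.sha256 && echo "all files intact"
```

Unlike btrfs, this only detects corruption when you run it, and can’t repair anything — but it does catch the “silently synced corrupted data” scenario mentioned earlier.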
i must admit, i am still tempted by the idea of getting a usb stick (ext4) for sensitive things after all. it would be affordable, silent and fast, permanently attachable yet portable, easy to carry and to store in safe places. and let’s not forget the privacy aspect of having all data stored “externally”. dunno
3-2-1 rule, which in principle I’ve been applying for about 3 years. Currently external HDD + USB stick, encrypted. I update every few months using rsync (grsync), which is a really reliable tool. I use a rather good USB stick (last “1”) for reasons that @manuel mentioned and I keep it at my girlfriend’s home.
Seems like a safe place
That’s why I’m on Ubuntu Server. Say whatever you want - I don’t like their desktop version, but I find their server edition very reliable. It runs many Docker containers, haproxy and ZFS like a charm. It hasn’t let me down in a couple of months, and probably in a couple of days I’ll sign up for those instant kernel patches that don’t require reboots.
And, at the beginning, you asked about ZFS
My observation is that many of you rely on manual work, which in my opinion isn’t as reliable as automation… I mean you have to remember to swap disks, or do your backup every ‘x’ days/months. I’d like to point out that the human is the weakest point of any IT system.