Moving Install to new Drive

So I found this guy for a great price and bought it, and I plan to move my current encrypted Btrfs install from a normal NVMe SSD to this new one. Both are the same size.

My question for the move: are Clonezilla, dd, and the like sufficient for this? Are there any gotchas with Btrfs?


Can’t advise on Btrfs, but one thing I would advise for sure: if there will be anything valuable on this drive, plan a regular backup strategy.

I don’t believe in miracles :laughing:


And make sure you’ve got something like this (CHECK FIRST THAT IT WILL FIT YOUR DRIVE) to adapt the outgoing drive to USB, so that you can run the clone even when you have no more than one NVMe slot.

I’ve used Clonezilla a lot in the past; highly recommended.

The encryption layer will force you to use something like dd (which clonezilla will probably use).
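If you do end up cloning by hand, a raw dd of the whole disk carries the LUKS header and ciphertext over verbatim, so the clone unlocks with the same passphrase. A minimal sketch, assuming the outgoing drive is /dev/nvme0n1 and the new one is /dev/nvme1n1 (those names are placeholders; verify yours with lsblk first):

```shell
# Placeholder device names -- confirm with lsblk before running;
# a swapped if=/of= would overwrite the source drive!
sudo dd if=/dev/nvme0n1 of=/dev/nvme1n1 bs=4M status=progress conv=fsync
```

conv=fsync makes dd flush everything to the target before exiting, so the reported time reflects the full copy.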

For reference, without encryption you could use btrfs replace:
btrfs replace start <srcdev>|<devid> <targetdev> <path>
On a live filesystem, this duplicates the data which is currently stored on the source device to the target device.
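For completeness, a sketch of how that looks in practice. The device names below are placeholders for the source and target partitions, and the filesystem stays mounted the whole time:

```shell
# Placeholders -- substitute your actual source/target partitions.
SRC=/dev/nvme0n1p2
DST=/dev/nvme1n1p2

# Kick off the online copy; this returns immediately and runs in the background.
sudo btrfs replace start "$SRC" "$DST" /

# Poll progress until it reports the replace has finished.
sudo btrfs replace status /
```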


Oh for sure! I make a weekly backup of /home to an external USB, so worst case scenario I just have to copy that back over.


Fortunately I have 3 M.2 slots on my motherboard, but thanks for looking out!

I’ll give clonezilla a try from a live USB first, hopefully it does the job.


The clone went off without a hitch! Used Clonezilla, and as @2000 thought, it used dd for its operations.

My only problem now is that because it’s a PCIe Gen4 drive, having encryption cuts my read/write speeds by like 1/4th. Think I might follow this ArchWiki article on removing LUKS encryption as this is my desktop and there’s a close to zero probability that it will get stolen.

Sorry, just to clarify, by 25% (to 75%) or by 75% (to 25%)?
In case of a fast nvme drive I personally could live with the former :grin:.

As long as the CPU supports AES-NI, the bottleneck on a system with fairly modern specs will almost always be the disk itself, not the processing power required for encryption or decryption; the performance hit will usually be only about 1–2% on an HDD or standard SSD.

Now, if you have veeery fast storage (nvme, m.2) :wink: , often the CPU can be the bottleneck even with AES-NI enabled.

What does
cryptsetup benchmark
say? This will give you an idea of what de-/encryption speeds your system is theoretically capable of.

To check which cipher is currently in use, run
sudo cryptsetup luksDump <yourLuksDevice> | grep "Cipher\|bits"

To check if aes-ni is used/activated, run
grep -m1 -o aes /proc/cpuinfo
Output should contain “aes” if your CPU supports it.

Filesystem type and mount options will add their own overhead (e.g. Btrfs with zstd level 1 transparent compression).
How did you measure and compare encrypted/unencrypted read/write speeds?
( hdparm -t <device> , or similar?)
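One cache-free way to eyeball sequential reads, if you’d rather not rely on hdparm: read a file back with O_DIRECT so the page cache can’t inflate the number. The file name below is arbitrary, and O_DIRECT needs a filesystem that supports it:

```shell
# Create ~512 MiB of incompressible test data on the drive under test.
dd if=/dev/urandom of=readtest bs=64M count=8 conv=fdatasync

# Timed read with O_DIRECT, bypassing the page cache.
dd if=readtest of=/dev/null bs=64M iflag=direct

rm readtest
```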

Sorry, I wasn’t that clear lol. I meant it was reduced by 75%…but now that I’m doing more testing, I don’t know what the best metric for testing my drive is, or what speeds I’m actually getting in regular use. Windows users regularly show 4+ GB/s read/write. Using KDiskMark on the default profile I get these results:


But on the “Real World Performance” profile I get this:


But the benchmark potential doesn’t agree with either of those 2 lol

cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1      2551279 iterations per second for 256-bit key
PBKDF2-sha256    4639716 iterations per second for 256-bit key
PBKDF2-sha512    1855886 iterations per second for 256-bit key
PBKDF2-ripemd160  927943 iterations per second for 256-bit key
PBKDF2-whirlpool  699983 iterations per second for 256-bit key
argon2i      12 iterations, 1048576 memory, 4 parallel threads (CPUs) for 256-bit key (requested 2000 ms time)
argon2id     12 iterations, 1048576 memory, 4 parallel threads (CPUs) for 256-bit key (requested 2000 ms time)
#     Algorithm |       Key |      Encryption |      Decryption
        aes-cbc        128b      1247.3 MiB/s      4117.3 MiB/s
    serpent-cbc        128b       112.9 MiB/s       712.8 MiB/s
    twofish-cbc        128b       220.4 MiB/s       413.8 MiB/s
        aes-cbc        256b       942.7 MiB/s      3355.0 MiB/s
    serpent-cbc        256b       112.8 MiB/s       716.4 MiB/s
    twofish-cbc        256b       220.9 MiB/s       418.5 MiB/s
        aes-xts        256b      2007.7 MiB/s      1977.7 MiB/s
    serpent-xts        256b       708.0 MiB/s       695.3 MiB/s
    twofish-xts        256b       408.8 MiB/s       412.8 MiB/s
        aes-xts        512b      1756.9 MiB/s      1753.4 MiB/s
    serpent-xts        512b       711.7 MiB/s       698.7 MiB/s
    twofish-xts        512b       414.6 MiB/s       412.3 MiB/s
sudo cryptsetup luksDump /dev/nvme0n1p2 | grep "Cipher\|bits"
Cipher name:    aes
Cipher mode:    xts-plain64
MK bits:        512

Then dd tests:

time sh -c "dd if=/dev/zero of=ddfile bs=128M count=200 && sync"; rm ddfile
200+0 records in
200+0 records out
26843545600 bytes (27 GB, 25 GiB) copied, 13.4466 s, 2.0 GB/s
time sh -c "dd if=/dev/zero of=ddfile bs=1G count=3 && sync"; rm ddfile
3+0 records in
3+0 records out
3221225472 bytes (3.2 GB, 3.0 GiB) copied, 1.14456 s, 2.8 GB/s
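One caveat with those dd numbers: /dev/zero is perfectly compressible, so on a filesystem with compression enabled (e.g. Btrfs mounted with compress=zstd) the write speed can read higher than the drive can really sustain. A variant with incompressible data, and with the flush counted inside dd’s own timing (file name is arbitrary):

```shell
# Random data defeats filesystem compression; conv=fdatasync makes dd
# flush to disk before it reports the elapsed time and speed.
dd if=/dev/urandom of=ddfile bs=64M count=16 conv=fdatasync
rm ddfile
```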

Then hdparm:

sudo hdparm -Tt /dev/nvme0n1p2

Timing cached reads:   29600 MB in  2.00 seconds = 14824.97 MB/sec
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 6484 MB in  3.00 seconds = 2160.89 MB/sec

Your real life speeds seem to be even better than cryptsetup’s benchmark suggests. :grin:

Overall, your speeds are what I’d expect from a nvme drive. My own speeds are about in that range.

Sure! :roll_eyes:
According to the description of one of my drives (a Samsung 970 EVO Plus), it “offers sequential read and write performance levels of up to 3,500 MB/s and 2,500 MB/s, respectively”. I tend to interpret the “up to” as pure marketing and consider about 1.7 GB/s in real life as adequate on an encrypted system; maybe even on an unencrypted one.

Would be interesting if you’d actually try this and get back to us with some benchmarks … :wink: