I’ve tested the speed of my ext4 HDD and I’m finding this a little weird…
Speed measured with KDiskMark (SEQ1M Q8T1, 3 runs of 32 MB):
Read 196 MB/s
Write 133 MB/s
Filled disk (2.9 TiB out of 3.6 TiB, 87% used):
Read 107 MB/s
Write 88 MB/s
I mean, obviously speed should drop somewhat as the disk fills, but that feels like waaayyy too much.
Any explanations, ideas, suggestions?
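If you want to cross-check KDiskMark with something simpler, here’s a rough dd sketch (file name and sizes are arbitrary examples; run it from the mount point you want to measure):

```shell
# Sequential write: conv=fdatasync makes dd flush to disk before it
# reports, so the figure reflects the drive rather than the page cache.
dd if=/dev/zero of=dd-test.bin bs=1M count=256 conv=fdatasync

# Sequential read: the number will be absurdly high (RAM speed) unless
# you drop caches first as root:  sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=dd-test.bin of=/dev/null bs=1M

# Clean up the test file.
rm dd-test.bin
```

It’s cruder than KDiskMark (single-threaded, queue depth 1), but it’s enough to see whether the full-disk slowdown shows up outside the benchmark tool too.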
Sounds like time to purge!
Less, of course, but it’s almost a factor of two, read speed included. Come on!
No way, it’s a new disk and perfectly healthy.
You’ve got it almost full already?
Sure, coz I’ve used it for data I already had, and I use the previous one (exact same model) as a backup.
I have 4 TB sitting here empty. lol
cat pictures! i warned you!
@Stephane is right: spinning platters have to keep seeking to the end of the data area before they can write, and reading requires inspecting the allocation tables before the seek.
Of course, but again, almost halved?
I don’t think I’ve seen such a massive reduction before.
Usually it’s something like a 10-30 MB/s difference at worst, I believe…
File system vs Disk.
Could well be ext4 journal writing slowing it down as the disk fills?
Yeah, I’d expect something like that to a degree…
But how do I check for it, and fix it, then?
To me it looks like some anomaly, perhaps buggy behavior, given the amount of decrease.
What I did with this disk was just copy all the data over from the older disk of the exact same model. I did this with the Dolphin file manager at the time, which was… very long compared to something like rsync, which I’ve recently discovered thanks to @dalto.
Jesus… that guy got a question with an even more hardcore example.
sudo tune2fs -l /dev/sdc1
tune2fs 1.46.2 (28-Feb-2021)
Filesystem volume name: STORAGE 2
Last mounted on: /run/media/x133/STORAGE 2
Filesystem UUID: 0b6b7c6d-1d51-414c-a103-d911dc51cfee
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 244195328
Block count: 976754176
Reserved block count: 48837708
Overhead clusters: 15616772
Free blocks: 112144913
Free inodes: 243873374
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Reserved GDT blocks: 1024
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Sat May 1 15:07:34 2021
Last mount time: Fri May 7 19:00:49 2021
Last write time: Fri May 7 19:00:49 2021
Mount count: 4
Maximum mount count: -1
Last checked: Sat May 1 15:07:51 2021
Check interval: 0 (<none>)
Lifetime writes: 3299 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: b10db4fc-ca1c-4602-b8df-88db7bf5d983
Journal backup: inode blocks
Checksum type: crc32c
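A quick back-of-the-envelope check on that output (my arithmetic, not something from the thread): Free blocks × Block size gives the raw free space, and the Reserved block count (ext4’s default 5% root reserve) is a sizeable chunk of it on a drive this big:

```shell
# Values copied from the tune2fs output above.
awk -v free=112144913 -v reserved=48837708 -v bs=4096 'BEGIN {
    gib = 1024 * 1024 * 1024
    printf "free space:        %.1f GiB\n", free * bs / gib
    printf "reserved for root: %.1f GiB\n", reserved * bs / gib
    printf "usable by users:   %.1f GiB\n", (free - reserved) * bs / gib
}'
```

That works out to roughly 428 GiB free, of which about 186 GiB is the root reserve, so only ~241 GiB is actually usable. Tools like df count the reserve against you, which may be why the reported used percentage looks higher than used/total suggests. On a pure data drive, `sudo tune2fs -m 1 /dev/sdc1` would shrink the reserve to 1%.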
Do you have any extra drives to test with? I would be curious whether
xfs would perform differently than ext4 in this context.
I do have some drives, but none I could use for a test at the moment. Maybe in a few months I could use one of my backups for such a test, why not.
If you have a backup and some free time, you could try fstransform.
It may be that your drive is beginning to fail, and it is not a good idea to have a drive filled to such a high percentage (I don’t like my drives to go above 60% full). I would suggest that you consider doing what I do: keep more than one backup drive of the largest size you can afford.
In my case, to back up my entire computer, I have four 4 TB drives, all containing the same data. They are all only about 11% full, and I update them all daily. Obviously, being so lightly filled, they read and write very quickly.
I formerly had 1 TB drives but they filled up too rapidly; I then bought 2 TB drives, but they were filling up rapidly as well, so I bought the 4 TB drives. These should last for quite some time. (I also have a fifth 4 TB drive which I keep in a safe-deposit box at my bank; I update that one only once every three months, so the most I could lose would be three months’ worth of data, and that would happen only if all four of my drives failed (not likely) or if I “goofed”, which I did once!)
I should also mention that I currently use those older smaller drives to store just my music, pictures, and videos (even though they are also stored on my main 4 TB drives). That is a lot of redundancy but I don’t want to take a chance of losing anything, especially those pictures, music, and videos.
These are just my thoughts on reading your post, but I hope they are of some interest and help to you.
Best of luck.
P.S. My ‘main’ 4 TB drives are formatted ext4 and encrypted. The older 1 TB and 2 TB drives, used just for music, pictures, and videos, are formatted NTFS and unencrypted.
While I agree with your strategy overall, it’s a little overkill for me, especially considering drive prices.
Oh, and like I said before, I’m 100% certain this drive is not failing or anything like that; it’s a perfectly fine, quadruple-checked new one.
sudo e4defrag -c "/run/media/x133/STORAGE 2/"
Yeah, I did. It said something like:
Fragmentation score 0
[0-30 no problem: 31-55 a little bit fragmented: 56- needs defrag]
This directory (/run/media/x133/STORAGE 2/) does not need defragmentation.
It would be pretty odd for a drive that you filled with a single mass copy of data to have heavy fragmentation.