I was bored and ran some quick, possibly not very accurate, throughput tests on the I/O schedulers.
The tests were done on a PCIe x2 NVMe SSD with a btrfs filesystem (no compression).
The Y-axis is in KBytes/s; fsync() is not taken into account.
Small files (64K), 1 I/O thread:
Small files (64K), 8 I/O threads:
Medium files (1M), 1 I/O thread:
Medium files (1M), 8 I/O threads:
Big files (64M), 1 I/O thread:
Big files (64M), 8 I/O threads:
Make of that what you want
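The post doesn't say which tool produced these numbers, but for anyone wanting to run something similar, a fio job along these lines could approximate the "small files (64K), 8 I/O threads" case. This is only a sketch: fio itself, the directory, and the job name are my assumptions, not the original setup.

```ini
# Hypothetical fio job sketching the "small files (64K), 8 I/O threads" case.
# /mnt/test and the job layout are assumptions, not the OP's actual setup.
[global]
directory=/mnt/test   # point this at the filesystem under test
ioengine=libaio
direct=1              # bypass the page cache so the scheduler is actually exercised
rw=write
bs=64k

[small-64k-8t]
size=64k              # one 64K file per job
numjobs=8             # 8 parallel I/O threads
```

Switching the block size and file size would cover the 1M and 64M cases; the scheduler under test can be changed between runs via /sys/block/<dev>/queue/scheduler.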
Thx, that’s very interesting!
I’m shy to ask… but can you make such charts for ext4 as well?
I really wonder how the two filesystems and their schedulers currently compare.
I seem to recall these drives have some fancy built-in queuing, which means
noop tends to be the better all-round option.
Yep, that should be exactly what happens
I was surprised to see the difference on
small files (64K) with 8 I/O threads though, that’s a lot!
Yes, that’s what I thought as well, and it still seems to hold.
However, this is just a throughput test; for a more complete picture we would have to run other tests (e.g. latency) as well.
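For the latency side, one common approach is a queue-depth-1 random-read run, which isolates per-request latency instead of aggregate bandwidth. Again assuming fio (the thread never names a tool) and a made-up path, a sketch could look like:

```ini
# Hypothetical latency-oriented fio job (fio and /mnt/test are assumptions).
[lat-probe]
directory=/mnt/test
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=1             # queue depth 1: each request waits for the previous one
size=256m
runtime=30
time_based
```

fio reports completion-latency statistics and percentiles for such a run by default, which would show whether the schedulers trade throughput for tail latency.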
ext4 as well?
Unfortunately this drive is btrfs only and has no free space left…