I/O schedulers comparison charts

I was bored and ran some quick, possibly not very accurate, throughput tests of the I/O schedulers with iozone.
The tests were done on a PCIe x2 NVMe SSD with a btrfs filesystem (no compression).
The Y-axis is in KBytes/s; fsync() is not taken into account.
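For reference, a run like this could be sketched roughly as below. The device name and the exact iozone flags are my guesses for the 64K / 8-thread case, not necessarily what was actually used:

```shell
# Sketch only: DEV and the iozone flags are assumptions, not the exact setup above.
DEV=nvme0n1
for sched in none mq-deadline kyber bfq; do
    # On a real system you would first select the scheduler (needs root):
    # echo "$sched" | sudo tee "/sys/block/$DEV/queue/scheduler"
    # -t: threads (throughput mode), -s: file size, -r: record size,
    # -i 0: write/rewrite test, -i 1: read/reread test
    cmd="iozone -t 8 -s 64k -r 4k -i 0 -i 1"
    echo "[$sched] $cmd"
done
```

Adding `-e` would include fsync() in the timing, which the charts above deliberately leave out.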

Small files (64K), 1 i/o thread:

Small files (64K), 8 i/o threads:

Medium files (1M), 1 i/o thread:

Medium files (1M), 8 i/o threads:

Big files (64M), 1 i/o thread:

Big files (64M), 8 i/o threads:

Make of that what you will :wink:


Thanks, that’s very interesting!
I’m shy to ask… but could you make such charts for ext4 as well? :blush:

I really wonder how the two filesystems currently compare, and how the schedulers behave on each.

I seem to recall that NVMe drives have some fancy built-in queuing of their own, which is why none/noop tends to be the better all-round option.
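For anyone who wants to check: the kernel marks the active scheduler with brackets in sysfs. A quick way to extract it (the sample line and the nvme0n1 path are assumptions for illustration):

```shell
# The sysfs file lists available schedulers with the active one in brackets.
# Normally you would read it with: line=$(cat /sys/block/nvme0n1/queue/scheduler)
line="[none] mq-deadline kyber bfq"
# Grab the bracketed entry and strip the brackets.
active=$(printf '%s\n' "$line" | grep -o '\[[^]]*\]' | tr -d '[]')
echo "$active"    # -> none
```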


Yep, that should be exactly what happens :+1:

I was surprised to see the difference on Small files (64K), 8 i/o threads though; that's a lot!

Yes, that’s what I thought as well, and it still seems to hold.
However, this is just a throughput test; for a more complete picture we would also have to run other tests (e.g. latency).
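If someone does get around to latency, a hypothetical fio invocation for that side could look like this. The filename, size, and runtime are placeholders, nothing measured in this thread:

```shell
# Sketch: small blocks at queue depth 1 stress latency rather than throughput.
# All paths/sizes here are placeholders.
fio_cmd="fio --name=lat-test --filename=/mnt/test/fio.dat --size=256m \
  --rw=randread --bs=4k --iodepth=1 --direct=1 --runtime=30 --time_based"
echo "$fio_cmd"
```

fio reports completion-latency percentiles per job, which would complement the iozone throughput numbers above.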

Unfortunately this drive is btrfs-only and has no free space left…