Why is Ubuntu 23.04 file I/O so much faster than Fedora 38?
Last week I published two videos: one for Ubuntu 23.04 and one for Fedora Workstation 38. Both included benchmarks of the usual subsystems I measure. The fio benchmark showed a drastic difference between Ubuntu and Fedora, and I mean a huge difference, a difference of about 100,000 IOPS. I dug into the data, and here is what I found.
00:00 – Intro
00:30 – Recap of Fedora Benchmark Issue
01:57 – Four I/O Schedulers
03:57 – I/O Sched: None
04:50 – mq-deadline
07:26 – Budget Fair Queueing (BFQ)
10:53 – Kyber
14:15 – Red Hat Use Cases
15:06 – Law of Minimums
18:08 – Final Thoughts
19:15 – Wrap-up
19:33 – Bench
26:25 – Closeout
26:37 – Outro
Support me on Patreon: https://www.patreon.com/DJWare
Follow me:
Twitter @djware55
Facebook: https://www.facebook.com/don.ware.7758
Gitlab: https://gitlab.com/djware27
#Fedora38 #Ubuntu2304 #Performance
I’m curious which is the best scheduler if you are running Optane drives.
Nice explanation
Are you running this in a virtual machine? And what filesystem did you use for the test? All of that will affect the results. Here, Fedora using Btrfs is faster than Ubuntu using ext4. I am using an NVMe drive for the test.
Hello again, I have a weird question.
What is the "fifo_batch" value on the Deadline I/O scheduler? What I know is that it sets how many writes and reads go into that value.
But I don't know how big those values are: 512 bytes, 4096 bytes, or something else…
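As far as I know, fifo_batch is a request count (how many requests mq-deadline dispatches in one batch), not a byte size, and the read/write deadlines are separate tunables in milliseconds. A minimal sketch for peeking at those values through sysfs, assuming an NVMe device named nvme0n1 with mq-deadline active (adjust the device name for your system):

    from pathlib import Path

    # Assumes the device is nvme0n1 and mq-deadline is its active scheduler.
    iosched = Path("/sys/block/nvme0n1/queue/iosched")

    # fifo_batch is a request count (requests dispatched per batch), not bytes;
    # read_expire and write_expire are deadlines in milliseconds.
    for name in ("fifo_batch", "read_expire", "write_expire"):
        tunable = iosched / name
        if tunable.exists():
            print(name, "=", tunable.read_text().strip())
        else:
            print(name, "not present (is mq-deadline the active scheduler?)")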
I have an aarch64 build of Fedora 38 running in the UTM virtual environment, and it has the "none" I/O scheduler on block devices by default. So… there may be a glitch in the kernel's device-type detection code or in the udev rules. Maybe.
Found your channel and this video while searching for why my Fedora kept hanging. Good info, definitely subbed. Love your style, man!!
"Fedora installations now enable transparent compression on the root Btrfs file system, including all its subvolumes." Is that compression the bottleneck?
Ubuntu has no security by default.
This is a lot of noise about nothing. Applying the correct scheduler to the current hardware and situation is, and always was, the responsibility of the admin. Simple as that.
If the distribution's defaults are not optimal out of the box for a certain situation, it does not matter very much for the non-professional user, and a professional user will take care of it anyway. So it does not really matter in practice.
So I think this is just a storm in a teacup and click-bait, or a waste of time at best.
Nothing against a video about scheduler variants and learning about them, but this has nothing to do with any specific distribution.
This is simply not an intelligent angle of approach to the topic.
The thing with Btrfs is that you do not need all the features, but more surprisingly, on a desktop/laptop I do not need or miss the IOPS either. Compiles and all my workloads are no longer disk-bottlenecked anyway. And on a server, in a lot of cases, snapshots are great to have.
Modern SSDs have controllers that operate on information not available to the OS. I/O schedulers are a problem programmers love to solve, but sadly, they are mostly obsolete today.
9:02 Don't you mean "4 ms and even 3"?
Another conclusion would be, no matter what you do, BTRFS will suck eggs 😛
Mine is using "none" as well, under Fedora (Silverblue) with an NVMe device.
I just 'nerded out' as my wife says 😆 Thanks so much for this!!
I like the AI art for your thumbnail. The monitor was throwing me off for a bit. And then I realized why.
I'm still frustrated by the fact that people moved away from deadline to noop for latency reasons when, in reality, they just ran into the difference that noop doesn't do I/O merges and deadline does by default. Deadline with merges disabled worked just as fast while still preserving a fair scheduler, and this misunderstanding was pushed upwards until most VM disk schedulers used noop. So now we're back to issues like swap stalls just because they hack around instead of writing/reading specs for their own stuff.
"Sorting" and "merges" are NOT NOT NOT NOT a mandatory component of deadline.
The effects are disgusting on large applications, but I suppose people deserve that.
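For context, request merging on current kernels can be toggled independently of the chosen scheduler via the block queue's nomerges attribute (0 = merges allowed, 1 = only simple merges, 2 = none). A minimal sketch, assuming a device named nvme0n1 and root privileges for the write:

    from pathlib import Path

    dev = "nvme0n1"  # assumed device name; adjust for your block device
    queue = Path("/sys/block") / dev / "queue"

    # Show the active scheduler (the bracketed entry) and the current merge setting.
    print("scheduler:", (queue / "scheduler").read_text().strip())
    print("nomerges :", (queue / "nomerges").read_text().strip())

    # Disable request merging entirely; takes effect immediately and does not
    # persist across reboots.
    (queue / "nomerges").write_text("2\n")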
I liked the video. How about going over the commands to change the scheduler? That would be really helpful.
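In case it helps: the active scheduler can be read and switched at runtime through sysfs. A minimal sketch, assuming a device named nvme0n1, root privileges, and that the target scheduler (kyber here) is built into or loaded in your kernel:

    from pathlib import Path

    dev = "nvme0n1"    # assumed device name
    target = "kyber"   # assumed target scheduler; must be available in your kernel

    sched = Path("/sys/block") / dev / "queue" / "scheduler"

    # The file lists the available schedulers with the active one in brackets,
    # e.g. "[none] mq-deadline kyber bfq".
    print("before:", sched.read_text().strip())

    # Writing a name switches the scheduler immediately (requires root).
    # This does not persist across reboots; a udev rule is needed for that.
    sched.write_text(target + "\n")
    print("after: ", sched.read_text().strip())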
How were these benchmarks produced? What are the chances each successive test produced higher results because the files were already in the Disk Cache? Separate installations? Recreated partitions?
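For what it's worth, a common way to keep the page cache from inflating successive runs is to sync and drop caches between tests (and to run fio with direct I/O). A minimal sketch of the cache-drop step, using the standard /proc/sys/vm/drop_caches interface (requires root):

    import os

    # Flush dirty pages first so dropping caches does not throw away unwritten data.
    os.sync()

    # Writing "3" drops the page cache plus dentries and inodes.
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")

    print("page cache, dentries, and inodes dropped")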
After I came back to Windows, I noticed it also has problems with severe (unnecessary) disk caching for processes and unusually high pagefile (swap) usage. But here's the deal: on Windows, disk paging doesn't bring my entire system to a halt the way Linux did… something is deeply wrong in the core foundation of the Linux kernel, and we've been using pretty much the same foundation since the days of 2 GB spinning disks (obsolete).