Ubuntu Experiments With Raising CPU Requirement
Ubuntu, like many distros, supports the entire range of x86_64 CPUs, but there have been improvements and feature additions over the years, so is it about time to change that? Well, maybe, but first it's worth experimenting to see what would actually happen.
==========Support The Channel==========
► Patreon: https://brodierobertson.xyz/patreon
► Paypal: https://brodierobertson.xyz/paypal
► Liberapay: https://brodierobertson.xyz/liberapay
► Amazon USA: https://brodierobertson.xyz/amazonusa
==========Resources==========
Optimising Ubuntu: https://ubuntu.com/blog/optimising-ubuntu-performance-on-amd64-architecture
Try The Experiment: https://discourse.ubuntu.com/t/trying-out-ubuntu-23-04-on-x86-64-v3-rebuild-for-yourself/40963
=========Video Platforms==========
🎥 Odysee: https://brodierobertson.xyz/odysee
🎥 Podcast: https://techovertea.xyz/youtube
🎮 Gaming: https://brodierobertson.xyz/gaming
==========Social Media==========
🎤 Discord: https://brodierobertson.xyz/discord
🎤 Matrix Space: https://brodierobertson.xyz/matrix
🐦 Twitter: https://brodierobertson.xyz/twitter
🌐 Mastodon: https://brodierobertson.xyz/mastodon
🖥️ GitHub: https://brodierobertson.xyz/github
==========Credits==========
🎨 Channel Art:
Profile Picture:
https://www.instagram.com/supercozman_draws/
#Ubuntu #Linux #FOSS #CPU #OpenSource
🎵 Ending music
Track: Debris & Jonth – Game Time [NCS Release]
Music provided by NoCopyrightSounds.
Watch: https://www.youtube.com/watch?v=yDTvvOTie0w
Free Download / Stream: http://ncs.io/GameTime
DISCLOSURE: Wherever possible I use referral links, which means if you click one of the links in this video or description and make a purchase I may receive a small commission or other compensation.
What is up with people saying the point of Linux is supporting as many ancient devices as possible? NO, the point of any OS is to be the BEST OS, not just the most supported OS.
Atomic compare and exchange is actually quite simple: it compares a value and, if it matches, swaps in a new one, all in one single atomic operation. Because it is atomic, you can avoid any sort of race conditions. It's quite essential.
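A minimal sketch in C11 atomics (just an illustration, nothing from Ubuntu's actual build work) showing the compare and the write happening as one indivisible step:

    #include <stdatomic.h>
    #include <stdio.h>

    int main(void) {
        atomic_int value = 5;
        int expected = 5;

        /* If value == expected, store 10 and return true; otherwise
           copy the actual contents into expected and return false. */
        if (atomic_compare_exchange_strong(&value, &expected, 10))
            printf("swapped: value is now %d\n", atomic_load(&value));
        else
            printf("lost the race: value was %d\n", expected);
        return 0;
    }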
Up until last year, my home server was running Ubuntu 22.04 on a Core 2 E6800 CPU, which was definitely a v1. Its power supply kicked the bucket so it got retired even though everything else was in working order, and with its 2GB of RAM it was still capable enough for most tasks I had for it, so the box is still around in case I ever want to replace the PSU. The current server is an Ivy Bridge 3rd gen, which from my understanding is just under the v3 threshold, and I would be very disappointed not to be able to run Ubuntu 24.04 on it.
Luckily, I don't think that will be an issue – maybe I'll need to worry about 26.04.
I'm on 12th gen Intel.
Notes on pronunciation: "cmpxchg" -> "C-M-P exchange", also "comp exchange"; "vfmadd" -> "V-F-M add"; "movbe" -> "mov B-E", possibly "move B-E".
Compare-and-exchange specifically is a super important extension for supporting highly concurrent applications, like you use for web services and such; without it, coordinating access to resources between many threads is much slower.
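For instance, a toy spinlock built on nothing but compare-and-exchange (a C11 sketch to show the idea, not production code):

    #include <stdatomic.h>

    typedef struct { atomic_int locked; } spinlock_t;

    static void spin_lock(spinlock_t *l) {
        int expected = 0;
        /* Atomically flip 0 -> 1; on failure the call overwrites
           'expected' with the current value, so reset it and retry. */
        while (!atomic_compare_exchange_weak(&l->locked, &expected, 1))
            expected = 0;
    }

    static void spin_unlock(spinlock_t *l) {
        atomic_store(&l->locked, 0);
    }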
The 2008 MBP I've been considering getting a new battery for and making into a Linux machine would be outside the cutoff, but I'm also very unlikely to use Ubuntu. I think you have to draw a line in the sand occasionally, and a lot of what would get cut is 15 or 20 year old stuff. Switch to a distro that isn't making the cuts if needed. I know macOS saved a lot of install space when they stopped including 32-bit support, and again when they stripped out the original Rosetta and the old PowerPC code. Fewer things to maintain to keep the OS in shape is a real win for the devs.
Aren't there always archive repos where you can just run an old distro on your old machine?
Using Gentoo. No worries.
We have an E5440 CPU in one of our heavily loaded servers. And that's not the oldest one I've seen.
Why not make the install image v1 and have it install whichever variant is the newest supported by the CPU?
I still run my 3rd gen Intel i7 college laptop as a UPS-backed (i.e. laptop battery) hypervisor. It does the job very well for so many little tasks like reverse proxying and DNS ad-blocking, and if I lost support for it I'd be furious. Even if the battery goes bad, the rest of the mainboard can function without it, so there's no hardware death in sight for it yet.
I'm currently running Proxmox on it, though, so as long as Debian supports it I'm fine.
v2 is pretty much what is required by Windows 8.1 and 10. It should be fairly safe for Linux to target, since a lot of users with old hardware were already forced to upgrade just for Windows. Of course, users with very old Core 2-based systems with buggy CPUID would still like an option that "just works".
I'm on a 3770, so I would not hit the v3 mark. That said, I think it would be far more important to test the v2/v3 builds on more general benchmarks, as the vast majority of end users on something like Ubuntu are likely using it as their daily driver, doing stuff like browsing the web.
Why not just include it as a distinct new architecture like amd64v3 instead of reusing regular amd64? Some people will compile their flatpaks for v3 while still listing them as "amd64", defeating the purpose of such a system with its "packages for everyone" goal.
I think if they bump up to v2 they should let Ubuntu flavours decide whether they're going to follow suit, because stuff like Xubuntu and Lubuntu is targeting older hardware. v3 is harder to justify unless they can provide good stats on the performance improvements; I think they should do some kind of hardware survey if they decide to go through with it. Either way, I think they should hold off from doing it in the next LTS and maybe test around with it in interim releases.
For packages that don't include custom assembler or their own JIT, maybe we could move to just shipping WASM.
Compile packages for target machines, as it used to be?
I think we should keep support for older and slower machines.
Maybe we should add a feature where newer hardware can install newer features, but that suggests 5 different kernels and … complexity. And if you're talking about 12 ~ 15 year old hardware, then you're talking about pensioners and parents and old age homes, and … I don't want to ask those people to compile and install and configure a custom kernel.
Ideally, you'd have 2 or 3 levels of support, but then you'd have to change other parts of the distro around to suit, and … then you'd need 3x or 4x as many developers, and so you'd need to add telemetry, and before you know it you've become Canonical.
This is why I don't use Ubuntu anymore on the desktop. Why? They are becoming less open source now.
Terrible idea. My mom has a PC that currently runs Windows 10 very well, and if people didn't know how old it was they'd think it's a modern system, because it's snappy for everything she does. But it's a Sandy Bridge CPU and the motherboard isn't even UEFI yet (although I could update it to UEFI).
If Windows 10 has wider CPU compatibility than Linux, it's not going to be that Windows 10 -> Ubuntu switch in 2025.
You can have my Thinkpads after you pry them from my cold dead hands somewhere in the future apocalypse wasteland.
What about those of us who recently went for the great value of Ivy Bridge E5 Xeons? I'm too lazy to look up which version that is.
Honestly, I just don't see the benefit in removing support for the older CPU ISAs, as your day-to-day OS use is not going to benefit greatly from those instruction sets. At best this should be done at the application or library level, where the performance tuning/optimizations can be maximized. You can implement this in different ways: have the dynamic linker pick what's best based on the detected CPU features (which would mean larger packages providing v1/v2/v3 binaries), or have the package manager pick the correct build, at the cost of even more packages to maintain for the same software.
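On the application-level route, recent compilers can already branch on the microarchitecture levels at runtime. A sketch assuming a GCC/Clang new enough that __builtin_cpu_supports accepts the level names (the hot_loop functions are made up for illustration):

    #include <stdio.h>

    void hot_loop_v1(void) { puts("running the baseline x86-64 path"); }
    void hot_loop_v3(void) { puts("running the AVX2-class x86-64-v3 path"); }

    int main(void) {
        __builtin_cpu_init();
        /* Dispatch once at startup based on what the CPU reports. */
        if (__builtin_cpu_supports("x86-64-v3"))
            hot_loop_v3();
        else
            hot_loop_v1();
        return 0;
    }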
My experience with Gentoo taught me that seeking the best performance possible is more complicated than simply enabling every supported CPU flag. There is some dark wizardry involved in software that sometimes makes it run faster with older instruction sets. The reality is that software tested to be as fast as possible on x86-64-v1 didn't necessarily receive as much optimization attention for newer instruction sets.
I would note that PCIe v3 server kit works all the way back to X79 / Sandy Bridge, and while not power efficient, those systems do allow home users to play with relatively high-end enterprise kit. With minimal performance improvements on offer, I'm not sure moving the baseline is worthwhile. This seems like a distro-admin play rather than anything else.
Nice clickbait thumbnail text, prompting a nice round of "Canonical = Microsoft" comments and causing swirling conversation based on folks reacting to the thumbnail only, then acknowledging 2/3 of the way into the actual video that it's an exploration, not a plan.
lol @ 7:15 in the video, on the listing of the distros: you might wanna remove Gentoo from that list… Gentoo allows you to specify whatever is used for compile-time CPU optimizations (in make.conf with the COMMON_FLAGS variable, using `gcc -march=native -E -v - </dev/null 2>&1 | sed -n 's/.* -v - //p'` to find the instruction sets your CPU supports), so not an issue on Gentoo.
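For context, that per-system tuning lives in a few lines of /etc/portage/make.conf, something like this (the flag values are just a common example, not a recommendation):

    # /etc/portage/make.conf
    # -march=native compiles for exactly the CPU doing the compiling.
    COMMON_FLAGS="-march=native -O2 -pipe"
    CFLAGS="${COMMON_FLAGS}"
    CXXFLAGS="${COMMON_FLAGS}"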
Tumbleweed actually got optional support for v3 early this year. It’s just a pattern that can be disabled if you don’t want it (and as far as I know it is off by default if your CPU is not supported).
The atomic compare and exchange (aka compare-and-swap or CAS) is a really interesting operation. It's the most essential operation in parallel programming, and everything else can be built from it. The extension in question seems to be just the 16-byte (128-bit) version of it; the operation itself definitely existed before that.
The operation goes something like this: if this location contains the value x, write the value y to it, and let me know if you did. The key is that this is one atomic operation, meaning it's impossible for another instruction (even from a different core) to sneak in between the check and the write, which can happen between almost any other check-and-write pair…
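That check-then-write-or-retry pattern is the classic CAS loop; a C11 sketch of a lock-free increment:

    #include <stdatomic.h>

    static atomic_int counter;

    void increment(void) {
        int old = atomic_load(&counter);
        /* If another core changed the counter between our read and our
           write attempt, the CAS fails, refreshes 'old' with the current
           value, and we simply try again. */
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
            ;
    }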
MX Linux has an advanced hardware option. I prefer that approach: make v1 standard and v2 an optional version.
cmpxchg16b is a CPU instruction that may or may not exist on a given CPU. It essentially compares a 16-byte value in memory against one in registers and, if they are identical, writes a new 16-byte value in its place. It also does that "atomically", meaning once it starts it will finish before the CPU does something else (e.g. a different thread/process getting CPU time).
This instruction has been present in most Intel Core 2 CPUs, so if your CPU is from 2010 or later, it will most likely be supported. Note that Windows 8.1 and Windows 10 also had it as a prerequisite.
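In C you normally reach it through compiler builtins rather than writing the instruction by hand; a sketch with GCC/Clang on x86_64 (assumes compiling with something like -mcx16, possibly linking -latomic):

    #include <stdio.h>

    typedef unsigned __int128 u128;

    int main(void) {
        u128 slot     = ((u128)1 << 64) | 2;  /* high half 1, low half 2 */
        u128 expected = ((u128)1 << 64) | 2;  /* what we think is there  */
        u128 desired  = ((u128)3 << 64) | 4;  /* what we want instead    */

        /* Atomically: if slot == expected, write desired; report success.
           With -mcx16 this can map onto cmpxchg16b under the hood. */
        int ok = __atomic_compare_exchange_n(&slot, &expected, desired, 0,
                                             __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
        printf("swap %s, low half is now %llu\n",
               ok ? "succeeded" : "failed", (unsigned long long)slot);
        return 0;
    }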
I'm not understanding the backlash. There are plenty of distros that effectively target older chipsets. There is no need for Ubuntu to spend the time and effort supporting hardware that others support more effectively and specialise in. If I have a 15 year old machine, I'm not looking at Ubuntu, I'm looking at Puppy Linux, Tiny Core and others. Ubuntu doesn't need to be everything to everyone.
Maybe going Gentoo and compiling everything for your CPU yourself would be even better??
WTF? Linux is doing planned obsolescence now???
This was basically the reason why Gentoo was super popular in the 2000s: you set compiler flags system-wide, and you compile each application for your specific system. Now I see how foolish it was, but still, those were the best years of my life.
I have some Sandy Bridge machines in service still… Not looking forward to the Haswell cutoff if it happens.
Ubuntu should support everything or have 2 versions, Ubuntu-Newb and Ubuntu-Leet
Maybe what needs to return is a trend you had in the early 2000s: distributions offered kernels and performance-critical packages in multiple, sub-architecture-optimized builds.
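For what it's worth, glibc (2.33+) already has a mechanism along these lines: the dynamic loader checks glibc-hwcaps subdirectories named after the microarchitecture levels before falling back to the baseline library. A distro could ship a layout roughly like this (libfoo is a made-up example):

    /usr/lib/x86_64-linux-gnu/libfoo.so.1                          (baseline, v1)
    /usr/lib/x86_64-linux-gnu/glibc-hwcaps/x86-64-v2/libfoo.so.1
    /usr/lib/x86_64-linux-gnu/glibc-hwcaps/x86-64-v3/libfoo.so.1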
I'm down for optimised builds, but if your CPU can run a Linux desktop and apps without issue, it should be supported. 32-bit was dropped by many distros because it's realistically difficult to find many apps in 2023 that still support it. Pre-Haswell 64-bit CPUs are still largely capable of running most apps, assuming their performance isn't an issue. Until x86-64-v1 chips physically can't run Linux acceptably, it seems like a bad idea to drop them.
I personally refuse to use Ubuntu, or any Ubuntu-based distro at this point, as they keep pulling more and more shady BS. I'd say a good place for a lot of people to start on Linux with support for older hardware is Solus Budgie, or Manjaro GNOME/KDE if they're not comfortable with computers. Ubuntu is just driving itself off a cliff faster and faster.
Gentoo looking at this: delicious optimization ahead
"that is less than 10 years old is fairly insignificant" …. it is exactly not when you are talking high core count server grade systems (pre-v2 systems are uninteresting these days indeed, v2 systems are a very different story).
Linux is useless, so I don't care. It's only good for light users.
Canonical = Microsoft?
It would be so easy to just make separate builds for these optimized versions without dropping the others.
Some Stalinist Canonical staff can sure have their own culture of how to run their distro.