I can’t help but wonder if kernel devs realize how much this discussion sounds like something you’d expect from Apple. They are talking about obsoleting hardware not because it’s fundamentally broken, but because it no longer fits neatly into a roadmap. Open source has always been about making hardware outlive commercial interest, letting it run long after the vendor abandons it.
I'm pretty shocked to see comments like "the RAM for a 32-bit system costs more than the CPU itself", but open source isn’t supposed to be about market pricing or what’s convenient for vendors; it’s about giving users the freedom to decide what’s worth running.
I understand that maintainers don’t want to drag around unmaintained code forever, and that testing on rare hardware is difficult. But if the code already exists and is working, is it really that costly to just not break it? The kernel's history is full of examples where obscure architectures and configs were kept alive for decades with minimal intervention. Removing them feels like a philosophical shift, especially when modern hardware is more locked down and has a variety of black box systems running behind it like Intel ME and AMD PSP.
> They are talking about obsoleting hardware not because it’s fundamentally broken, but because it no longer fits neatly into a roadmap.
Not really. The discussion is about costs, benefits, and available resources. Projects are not immune to those constraints just because they are open source or free software. Actual people still need to do the work.
> Open source has always been about making hardware outlive commercial interest and let it run long after the hardware vendor abandons it.
Again, not really. Open source has always been about freely modifying and distributing software. That leaves anyone free to keep supporting their pet hardware, but that’s a consequence, not the point. In this case, I don’t think there would be a real problem if anyone stepped up and committed the resources necessary to keep supporting older hardware. No freedom was taken away just because a project’s developers decided something was no longer worth their time.
> But if the code already exists and is working, is it really that costly to just not break it?
It depends on the feature, but in many cases the answer is in fact 'yes.' There's a reason why Alpha support (defunct for decades) still goes on while Itanium support (defunct for only a few years) has been thoroughly ripped out.
An interesting example of this was i386 support (as in being able to run on the 80386) being removed way back in kernel 3.8 (early 2013). If you can support the 80486, why can't you support the 80386?
Well, there really wasn't much support for atomic instructions in x86 before the introduction of compare-exchange in the 486. Any time you wanted guaranteed atomicity on a 386 you had to disable interrupts, which among other things means that if you lock up during a critical section your entire system hangs. Another implication is that nearly all of our lockless data structure constructs depend on compare-exchange instructions.
It vastly simplified some very tricky sections of the kernel to remove support for systems that don't have hardware atomic instructions, so it ended up being done.
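To make that concrete, here's a minimal sketch of the compare-and-swap retry loop that lockless code is built on. This is plain C11 userspace code for illustration, not the kernel's actual API (the kernel has its own cmpxchg()/atomic_* helpers, which on x86 compile down to the CMPXCHG instruction the 486 introduced); on a 386 there is no such instruction, so getting the same guarantee means taking a lock or disabling interrupts.

```c
/* Illustration only: the lock-free add pattern that compare-exchange
 * enables. Userspace C11, not kernel code. */
#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned long counter;

static void lockfree_add(unsigned long delta)
{
    unsigned long old = atomic_load(&counter);

    /* Try to swap in old + delta; if another CPU changed the counter
     * in the meantime, 'old' is refreshed and we simply retry.
     * Without a compare-exchange instruction (i.e. on a 386), the only
     * way to get this guarantee is a lock or, on a uniprocessor,
     * disabling interrupts around the update. */
    while (!atomic_compare_exchange_weak(&counter, &old, old + delta))
        ;
}

int main(void)
{
    lockfree_add(5);
    printf("counter = %lu\n", atomic_load(&counter));
    return 0;
}
```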
What's the Venn diagram of people stuck with 32-bit hardware and people needing features of newer kernels? Existing kernels will keep working. New devices probably wouldn't support that ancient hardware; seen any new AGP graphics cards lately?
There's not a compelling reason to run a bleeding edge kernel on a 2004 computer, and definitely not one worth justifying making the kernel devs support that setup.
The bulk of CVEs that crossed my desk in the last couple of years were in things that wouldn’t matter on a 32-bit system, like problems in brand new graphics cards or fibre channel or 10G Ethernet, or KVM hosting, or things like that. There wasn’t a huge flood of things affecting older, single-user type systems.
But in any case, I’m sure Red Hat etc would be happy to sell backports of relevant fixes.
It’s not that I’m unsympathetic to people with older systems. I get it. I’ve got old hardware floating around that I’ve successfully kept my wife from e-cycling. It’s that I’m also sympathetic to the kernel devs who only have so many hours, and don’t want to spend them supporting ancient systems that are no longer widely used.
If I'm running a living museum of computer history, to let the youth of today experience 15" CRTs and learn the difference between ISA, PCI and AGP slots - I'm probably not connecting my exhibits to the internet.
For legacy hardware like this it's usually not anywhere close to as important as it is for modern systems.
These systems are not being used to browse the modern web; they're too slow.
They're not being used to host production multiuser environments beyond a few retrocomputing homelabbers' toy systems, where the worst case scenario is a restore from backup and banning a user or two rather than data loss/breach and/or legal action.
The ones still in actual active use are almost all appliance systems that mostly haven't seen a kernel update in years (or ever), because they usually exist to go with some piece of hardware that never got an in-tree driver and thus can't work with anything much newer than what it shipped with, and/or some software that depends on ancient libraries no distro ships anymore. These systems don't need to (and shouldn't) be exposed to untrusted networks, users, or content; they can (and already should) be locked down by a skilled admin to communicate only with the minimum number of systems needed for whatever purpose they serve. If the admin isn't skilled enough to confidently handle that, the system and admin should both be replaced.
---
I have an old IBM PS/2 that was my family's first computer, and it still has its original Windows 3.1 install on it. I imaged the original hard drive and moved it to a CF card, which also means I can screw around with it and not worry about breaking anything, because I can just restore the last known good image. I don't connect it to the internet often, but if on one of those rare occasions I somehow stumbled upon someone who had been saving a drive-by exploit for IE 3.0 or an RCE against Trumpet Winsock, and it infected my system, I'd just do the same. Anything this old is small enough to be imaged easily.
> open source isn’t supposed to be about market pricing or what’s convenient for vendors; it’s about giving users the freedom to decide what’s worth running.
Ehhh, it's about users having the ability to run whatever they like. Which they do.
If a group of users of 32 bit hardware care to volunteer to support the latest kernel features, then there's no problem.
If no one does, then why should a volunteer care enough to do it for them? It's not like the old kernel versions will stop working. Forcing volunteers to work on something they don't want to do is just a bad way to manage volunteers.
> If a group of users of 32 bit hardware care to volunteer to support the latest kernel features, then there's no problem.
It's not just that you need people to support 32-bit/nommu; you also have to account for the impact on other kernel devs, whose features become harder to implement while that support stays in the tree.
This is called out in the article around keeping highmem support.
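For anyone unfamiliar with why highmem is a burden on everyone else: on 32-bit configs with more RAM than the kernel can keep permanently mapped, generic code can't just take a page's address, it has to create and tear down a temporary mapping first. Here's a rough sketch of that pattern using the in-kernel kmap_local_page() helpers (an illustrative fragment, not a complete or buildable module):

```c
/* Rough sketch of the pattern highmem forces on generic kernel code.
 * Illustrative fragment only. */
#include <linux/highmem.h>
#include <linux/string.h>

static void zero_one_page(struct page *page)
{
    /* With highmem compiled out (or on 64-bit), this boils down to
     * page_address(). With highmem, it has to set up a temporary
     * kernel mapping for the page... */
    void *addr = kmap_local_page(page);

    memset(addr, 0, PAGE_SIZE);

    /* ...and tear it down again when we're done. */
    kunmap_local(addr);
}
```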
That is a fair point! I do think though that it would make sense for maintainers to at least put out an open call to users and developers before dropping something as fundamental as 32 bit support. The reality is that not all users are going to be kernel developers, and even many developers today aren’t familiar with the workflow kernel development requires. Mailing lists, patch submission processes, and the cultural expectations around kernel work are all a pretty steep barrier to entry, even if someone does care about the removal and also happens to be a developer.
The other dynamic here is that the direction in Linux does come from the top. When you have maintainers like Arnd Bergmann saying they would "like" to remove support for hardware (like the ARM boards), that sets the tone, and other contributors will naturally follow that lead. If leadership encouraged a philosophy closer to "never break existing hardware" the same way we’ve had "never break userspace" for decades, we probably wouldn’t even be debating removing 32-bit support.
I’m not saying kernel devs need to carry the weight alone, but it would be nice if the community’s baseline stance was towards preservation rather than obsolescence. :(