This was fixed both in userspace and the kernel in 2019... the userspace fix was created in April and merged on the 3rd of September, while the kernel commit is dated Sept 18.
I'm unclear on why this is being sold as "no one maintains X11" and not "the kernel wants to support old versions of X11".
The mailing list post states "Fortunately, since this was committed, somebody did actually disable the userspace side by default in X11", which appears to be false (it happened first)?
I am so glad it does. During my PhD, I had to maintain a very old stack of software (about 1 million lines of C++ on top of 300,000 lines of Fortran).
It was not possible to compile it on anything other than Windows 2000, and it was not possible to upgrade the MFC version or any of the old compilers.
It was basically stuck in time but with a very high value for the end-users, and our research group did not have the bandwidth to upgrade. Microsoft was nice to us and kept the compatibility, and still keeps it 20 years later!
I am using Linux as my daily driver, but I must say, Microsoft is the king of backward compatibility. A feeling of "compile once, run forever on Windows".
> "Microsoft is the king of backward compatibility."
The thing is, this hasn't been true for a long time. At this point Wine/DOSBox are far more compatible with old, weird software than the current version of Windows. For example, you can't run 16-bit Windows programs at all on Windows.
You even have projects like otvdm, running Wine on Windows to support older programs (or DOSBox, which I think everyone is a lot more familiar with). http://www.columbia.edu/~em36/otvdm.html
The genuine article tends to be better while older platforms are supported, but at some point, inevitably, older platforms are dropped because supporting them costs money. When that happens, open source imitations end up with better support than the original.
> You even have projects like otvdm, running Wine on Windows to support older programs (or DOSBox, which I think everyone is a lot more familiar with).
Note that otvdm isn't the same as DOSBox, which emulates everything. Otvdm passes the calls to the native 32-bit APIs, and it relies on the fact that Windows does preserve backwards compatibility. It wouldn't work without it.
DOSBox, on the other hand, only needs a framebuffer, an audio device and an input device to work, and does everything else itself.
NTVDM and DOS/16-bit compatibility only died with the death of 32-bit Windows in Windows 11. I'd say 40 or so years of backwards compatibility is pretty good.
It is and isn't. Really old stuff seems to work well, but merely old stuff is more often than not unworkable. There's a serious issue in the Linux world with distros going down and software disappearing over time.
> For example, you can't run 16-bit Windows programs at all on Windows.
Isn't that an architecture limit? AFAIK 64-bit mode on x86_64 machines does not support vm86 mode, so 16-bit code cannot run directly on a 64-bit OS. That is why dosemu no longer works on (64-bit) Linux, while DOSBox works, because it is a full CPU emulator.
AFAIK not exactly: under 64-bit mode you can run 16-bit protected mode code, but not 16-bit real mode code. While this would be an issue for Win1.x/Win2.x compatibility, the majority of 16-bit Windows applications are made for Win3.x, which tends to mean 16-bit protected mode.
This is basically why you can run 16-bit Windows programs/games under Wine on 64-bit Linux (though you may need to enable some flag via /proc or /sys whose name I don't remember).
Correct. AFAIK the main reason not to support Win16 protected mode was the size of the relevant bits of HANDLEs, which would have been severely limited to 16 bits, completely obviating the reason to use 64-bit Windows. As is, HANDLEs carry 32 bits of significant data at least through Windows 10. In theory, now that Windows 11 has dropped 32-bit support, they could be 64 bits on Windows 11. All of this because someone will inevitably want to use the clipboard between a 64-bit app and a 16-bit app, and it would have made things VERY painful. Could MS have written a layer that "fixed" that? Potentially, but the number of people using 16-bit software was relatively small by that point AFAIK.
But the descriptor tables do technically support 16-bit protected mode code when in compatibility mode under long mode AFAICT.
It's worth stressing that it's not just about 16-bit applications; even older 32-bit games for Windows that use things like DirectX 6 often work better under Wine than on Windows these days.
Linux has a rule to "never break the userland". So if you maintain your own userland (libc, GUI libs, etc.), even the latest kernel will run your old software.
> "The 32-bit Oracle 8i database will no longer run on modern Linux."
Yes it will.
> "I forget the exact errors, but the software is simply too old."
You're running into compatibility issues with newer libraries. Libraries do change in incompatible ways -- but the solution is simply to install the older library.
You absolutely can run 32-bit Oracle 8i on a modern kernel. The steps to do this in a nutshell (omitting a few details) are:
1) Grab a Linux userland which 8i was built for (e.g. Red Hat 7) and extract it somewhere. Use docker, create a chroot, whatever you like.
2) Enter the container/chroot and install 8i. It'll work fine, because you'll be linking the correct library versions and those libraries will still work even on a modern kernel due to the aforementioned kernel compatibility guarantees (a rough sketch of the chroot approach follows below).
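To make it concrete, here's a minimal sketch of what "old userland on a modern kernel" boils down to at the syscall level. The rootfs path is hypothetical, and in practice you'd use docker/podman or systemd-nspawn rather than hand-rolling it:

    /* Sketch: run a binary from an extracted legacy distro tree on a modern
     * kernel. Every syscall still goes to the new kernel; only the libraries
     * and the dynamic linker come from the old userland. Needs root. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        const char *oldroot = "/srv/redhat7-rootfs";  /* hypothetical extracted old distro */

        if (chroot(oldroot) != 0) { perror("chroot"); return 1; }
        if (chdir("/") != 0)      { perror("chdir");  return 1; }

        /* From here on, /lib, /usr/lib, etc. are the old distro's copies. */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
    }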
True, but users don't care why it breaks, only that it does, and most people consider "the distro" to be Linux.
So you probably CAN get old Oracle databases to run on the latest kernel, but all the supporting software would be a weird mishmash of old libraries, etc.
VMware is a different beast altogether. Maybe it'd remove the maintenance of a physical NT machine, but you'd still be maintaining the whole NT software system, and honestly that's most of the headache. Plus the overhead VMs have and all.
Containers run on your kernel and hardware natively, with a thin layer of kernel-side shims so they get a different root and possibly lose raw hardware access. Oracle 8i should run just as well as it would have 20 years ago, if you have all the userland libraries in order (hence putting an old distro in the container).
As long as Oracle isn't internet-facing (you can set up the container's networking however you wish, with whatever firewall rules you wish), it should be fairly safe. The modern kernel you run it on should be up to speed on the latest security patches too.
Windows 2000 was a major platform for 5+ years, so if you have a development environment on it and never moved it elsewhere, there will be lots of assumptions built-in.
As the OP mentioned, they didn't have _time_ to fix it; likely 80% of the problems are hard-coded paths and such that would be relatively easy to "fix", but the other 20% may be dependencies on things that got deprecated/removed in later versions of the build environment, and updating would take quite a bit of work.
This is one of the reasons that if you are trying to build cross-platform software (and Windows 2000/XP/98/Vista is cross platform, mind you) you should start building and testing on the various platforms early.
> if you are trying to build cross-platform software (and Windows 2000/XP/98/Vista is cross platform, mind you) you should start building and testing on the various platforms early.
Yeah, people tell me they're having problems migrating their Java 8 app to something modern, and I can't understand why they haven't always been running all versions of Java in CI, including pre-releases, and just fixing the one thing a month that pops up as a problem. They could still run on Java 8, but they would know it's going to work fine everywhere, and if they found something that won't work in a future version they could engage the Java developers before it's released.
People do some of that, but Java 8 is in that weird state where it is Java for so many things that moving off of it can be incredibly hard. It took Mojang/Microsoft how many years to get Minecraft working on something that wasn't Java 8?
And that doesn't even cover the changes in the versions of Java 8 itself that break things - one even had to be reverted IIRC.
I don't know what you mean - if that's the case why didn't they realise that in like 2015, 2016, when these Java 9 features were added? And why haven't they been able to fix them since then?
I know about development realities, but seriously, come on, at some point?
I think Linux is the king of backwards compatibility, in that you can always download an old version of Linux and run your old software stack. Windows forces you to use a new version; Microsoft puts a lot of effort into backwards compatibility but it's always a gamble.
> in that you can always download an old version of Linux and run your old software stack.
That's not really what's meant by backwards compatibility. Backwards compatibility generally means your current OS can run your old software without having to install a raft of older (and likely insecure) versions.
Windows accumulates backwards compatibility "hacks" over time, meaning you likely have a fairly decent chance of running your older software even where vendors misused APIs in ways they ought not to have.
Of course there are the usual caveats about running 16-bit apps on 64-bit Windows, where you'll need to find a 16-bit emulator.
> Windows forces you to use a new version
It doesn't. If you were running Windows 7 or 8 on the cusp of the Windows 10 roll-out, that 20-25 year old app will still run on the currently supported versions of Windows. The "hacks" for that ancient app you still know and love will still be there. Now there are some far-away and as-yet-undiscovered edge cases, really exotically and mythically rare ones, where yes, their hacks might only appear in a future version of Windows.
I'd rather have old software run on a new OS and hardware
than have access to an old OS and hardware to run the software.
Simply because that means you're not reliant on obsolete hardware (expensive, poor performance, hard to find) and potentially an OS full of security holes.
Similarly, I'd rather have to use a VM at compile time than at run time, for convenience.
Will that old version of Linux be able to run on a modern computer with working drivers for all the hardware? Backwards compatibility means that you can use the current OS with the current drivers and still run the old software.
Where I'm from, you can still easily buy a Windows XP license. I've also seen a win2k license maybe about a year ago. Should have bought the box for nostalgic purposes when I had the chance now that I think of it.
> It was basically stuck in time but with a very high value for the end-users
I wonder why it wasn't properly maintained if it was such a valuable piece of software? I assume they perform regular maintenance on all other high-value property they own?
I would guesstimate that software was 100% 'maintained' by PhD students with x years of funding in which they 1) had to get their thesis done. 2) had to get their thesis done. 3) etc.
Spending a few months to make life easier for your successors just isn’t worth it. And yes, it might be worth it at the start of your PhD, but then, you don’t know the software well enough to dare embark on such a journey.
Also, it’s quite possible none of those PhD students had decent schooling in software engineering.
I completely reject preserving backwards compatibility for the sake of application bugs.
Why should Microsoft produce a hack for a particular piece of software? That software might be used by tens, hundreds, or thousands of people for a certain amount of time, whereas Windows itself impacts billions of people daily and will continue to do so for decades to come.
At some point the programmers, PMs, and maybe even the bug tracker behind these hacks will move on. Institutional knowledge is rarely forever. A newer generation of developers, PMs, etc., will have to waste time understanding the bug and deciding whether to support it or not, possibly without ever knowing if the software in question has moved on itself, or is even in use anymore.
Say what you want about Apple, but the words "badly aged" or "crufty" are rarely used to describe their ecosystems. They are the stunning definition of Darwinism at its finest.
Being able to run software is what makes a computer valuable. If you constantly break software, the computer is useless.
Fortunately nowadays we have more resources to throw at the problem and can run such software in containers, VMs, or other such isolated sub-environments when necessary, but that wasn't always the case.
> Why should Microsoft produce a hack for a particular piece of software?
Because if software app Foo works fine on Windows X, and suddenly breaks on Windows X+1, users don't care if Foo is actually buggy or not. It worked on Windows X. What they see is that Windows X+1 broke Foo. And they talk to each other - "Do you use Foo? If so, don't upgrade to Windows X+1, it breaks Foo!"
And if there's another app Bar which has a different bug that nevertheless works on Windows X but breaks on X+1, those users will chime in saying "Oh yeah, we noticed it breaks Bar too!"
And then you get the general conversation go around "Don't upgrade to Windows X+1, it breaks random software!" Do you rely on some niche app, or even a semi-popular one made by anyone who isn't Microsoft? If so, better stay away from Windows X+1 - it breaks a bunch of random apps dontchaknow.
Apple is a terrible example to hold up for breaking backwards compatibility. Their capricious and self-serving changes are terrible for developers and users that just want their software to work.
Apple and Windows are extremes in each direction. The best (and hardest) answer is to live in the middle somewhere.
Microsoft has a systematic approach to applying these hacks, so it's less expensive for them to maintain them than it would be otherwise.
For instance, they changed the implementation of malloc() between Win95 and Win2k in a way that broke some applications that depended on the old malloc()'s behavior, so there is a flag in the system for those applications that restores the old malloc().
I suspect, however, that Windows is a little more sophisticated in their implementation of the concept. "Starts with X" is a much wartier approach than (invented example) a rules-based framework that checks against exe metadata/checksum.
Microsoft's head of marketing at the time posted about this on internal Yammer. He said there were two main reasons: first, "Windows 10" just did better in focus groups than Windows 9 or other alternatives; second, since the plan at the time was for all future changes to be delivered as updates rather than a newly branded Windows release†, it felt better to end on a round number.
† btw, what became Windows 11 started development as just another Windows 10 feature update, and what gets branded as a "new release" vs an "update" is mostly a marketing decision.
> what became Windows 11 started development as just another Windows 10 feature update, and what gets branded as a "new release" vs an "update" is mostly a marketing decision.
The move from Windows 10 to Windows 11 came with some pretty severe restrictions on supported hardware, so the version jump in this case makes sense to me. It would be a lot more difficult to push an "update" which obsoleted so much hardware like that.
> came with some pretty severe restrictions on supported hardware
That's hardware, and just like with the Linux kernel you have some tough choices to make about what you continue to support. Apple rendered my Intel Mac Mini a brick because, despite having a 64-bit CPU, its BIOS/UEFI or whatever was 32-bit, and I was stuck on macOS 10.6 with no way to upgrade to 10.7 Lion.
The number 9 is considered an unlucky number in some Chinese cultures.
Same reason we never had an iPhone 9. Other companies skip the number 9 too for the same reason.
Companies will not admit this in public so as not to offend the CCP or its people. It's why their reasons, when questioned, tend to seem a little odd or incomplete.
I don't really see how that's related, since -- as far as I recall -- that was Microsoft working around outside programs inspecting the kernel, rather than the kernel inspecting an outside program... but please feel free to enlighten me.
I think this is probably exaggerated. In Windows, basically all old software works with very little fiddling. Maybe downloading a DLL it literally just tells you it needs, maybe trying a few different compatibility modes. It is normally a <5 minute affair.
I am glad that MS goes ahead and fixes some of the most popular software, but even the ones they clearly did not do this for work almost as seamlessly.
There have been some notable application-specific hacks applied, at least in previous generations of Windows. SimCity comes to mind: it used memory after free and Windows... 95(?) broke it, so a hack was put in place to explicitly allow processes named simcity.exe to use memory after free. Or something like that.
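For illustration only (this obviously isn't SimCity's actual code, just the classic shape of the bug): a pattern like the following is undefined behaviour, but it "works" as long as the allocator leaves freed blocks untouched, which is exactly the property older heaps happened to have.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *name = malloc(32);
        if (!name) return 1;
        strcpy(name, "SimCity");
        free(name);

        /* Use after free: undefined behaviour. An old allocator that leaves the
         * block intact makes this appear to work; a stricter allocator may have
         * already reused or poisoned the memory, and then the app breaks. */
        printf("%s\n", name);
        return 0;
    }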
What people miss from the story is why it was so important that Microsoft do that (and it's a similar reason to why Linus insists on never breaking userspace).
The article makes it sound like the only choices were this hack or break X, but that's not the case.
It would be perfectly fine to have a "modeswitch_prohibit" property (hooked to sysfs and kconfig) which contains a list of process names to match. It would be unusual, but it would not be an ugly hack.
The Steam client is still 32-bit; it statically loads the "libX11" libs and statically loads libGL.
Waiting for wayland->x11 and vulkan->GL->CPU runtime fallbacks to happen, and, why not, for the client to become a pure and simple ELF64 app (with the smallest possible set of relocation types), libdl-ing everything, even the libc services it uses, and opening itself to alternative C runtimes.
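For what it's worth, the runtime-fallback idea is basically dlopen probing. A rough sketch of the display-backend part (the sonames are the standard ones; error handling kept minimal; build with -ldl on older glibc):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* Prefer a Wayland backend, fall back to X11 if it isn't there. */
        void *backend = dlopen("libwayland-client.so.0", RTLD_NOW | RTLD_LOCAL);
        if (!backend)
            backend = dlopen("libX11.so.6", RTLD_NOW | RTLD_LOCAL);

        if (!backend) {
            fprintf(stderr, "no display backend found: %s\n", dlerror());
            return 1;
        }
        /* A real client would dlsym() the entry points it needs here. */
        printf("loaded a display backend via dlopen\n");
        dlclose(backend);
        return 0;
    }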
Similarly, you haven't been able to play either GoldSrc or Source engine games on macOS for a while, i.e. ever since its 32-bit userland was yanked out. I would definitely enjoy a 64-bit rebuild.
(Now there's arm64-darwin, although I don't see Rosetta 2 going away anytime soon unless the whole world ends up skipping on Intel)
Rosetta 1 was available for 6 years, between Tiger and Lion, and Rosetta 2 was already introduced two years ago.
Since the performance upgrade from Intel to Apple M is less critical than PowerPC to Intel was, there is less incentive to move off Intel, and apparently Apple uses it as an opportunity for virtualization compat', so Rosetta 2 might be here longer, but I'd be careful about any assumption :)
Reminds me of when I tracked some bug in a Linux game (don't remember the name) down to some filesystem call on XFS not returning what the game expected, and it only happened if you had an "old" XFS (formatted with some older version).
There was a separate problem with some games borking out if your FS returned 64-bit inodes.
There was also a swath of bugs with Unity games crashing or behaving weirdly if your locale didn't have the same numeric separators as the default one.
So if, say, your locale printed 0.0 as 0,0, the game didn't work till you changed environment variables to use English or C.
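Here's a tiny illustration of that failure mode (not Unity's actual code; the locale name is just an example of one that uses ',' as the decimal separator):

    #include <locale.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        printf("C locale:     %f\n", atof("0.5"));      /* 0.500000 */

        if (setlocale(LC_NUMERIC, "de_DE.UTF-8") != NULL) {
            printf("de_DE parse:  %f\n", atof("0.5"));  /* 0.000000 - parsing stops at '.' */
            printf("de_DE format: %f\n", 0.5);          /* prints 0,500000 */
        }
        return 0;
    }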
I am not talking about games but about the steam client.
Dota 2 and CS:GO are Vulkan. I wonder if they can run natively in a Wayland compositor though; I still run Xorg (and I should not). Curious... anybody know?
That would be one of the maintainers of the Linux graphics subsystem. He has almost 4000 Linux kernel commits on his resume, so you're in for quite some work.
Also the patch that was committed (which was the third version) was reviewed by two other developers and acked by two more. That will add about 1700 more commits to your endeavor.
The point which your parent post made, and which I was implying in my original post, is that if the committer was willing to commit such a clumsy "fix", what else might they commit in other places?
Nobody said the committer was not intelligent nor prolific. What was implicitly brought into question was the judgement of the person.
"But all the people who reviewed and accepte the commit!..." Sometimes reviewers don't pay full attention (because they are busy). Or sometimes they don't feel confident to challenge someone. Or they may be inexperienced.
I understood, and I do believe you're experiencing the Dunning-Kruger effect as well. First, the graphics subsystem of Linux is in fact one of those that puts more weight on reviews even for code authored by the maintainers.
Second, do you have any idea what the issue being solved here is and what a less clumsy fix would look like? Nobody is denying that it's clumsy, which is why it is being reverted, but have you evaluated the alternatives or even bothered to look up the mailing list discussions before reaching your conclusions about it?
Apparently the root cause of this hack in the kernel is: "nobody wanting to maintain X11 anymore". Is X11 becoming yet another underfunded cornerstone of the open source ecosystem?
Note that this was fixed on the X side around that time (it's mentioned in the article too), but the kernel hack remained in order to avoid breaking any existing X installations.
AFAIK the issue with the X server is that its maintainers at the time (basically Red Hat) didn't want to continue the maintenance, however since then it has been passed to an independent maintainer, Povilas Kanapickas who -if you want to financially support- has a Patreon[0] (also he has posted here on HN occasionally). He is the one who has made the most recent releases.
This has been discussed in comments of Wayland threads on HN for years. People who want to keep using X complain that they still use it, someone replies "All the X maintainers have moved onto Wayland because they think it's the path forward, if you want X to be maintained step up and do it" and it's not clear if anyone ever did.
For whatever reason, it's only in recent years that distros have switched to using Wayland by default. So it's a bit early to completely drop the former display system. But it would be expensive to keep it alive as well!
It's annoying, X got to the level where it "just works" out of the box for me on my setup and now it's "well, fuck that, we got bored, let's go Wayland!"
And some design decisions are... iffy at best. Like the fact that now every DM has to implement its own mouse (trackpad, etc.) management because "well, we don't wanna" (basically; I'm sure the Wayland developers have a better-articulated response to why everyone else involved has to duplicate the code).
Or how many hoops software has to jump through just to record the screen (or is that fixed now?)
Shared code can (and does) exist outside of the compositor or display server protocol. E.g. for your ending question, xdg-desktop-portal is the shared resource in use these days.
I think the biggest mistake of Wayland was the rush on early migration for general use before it really had time to soak. The general design and not having everything in the display server itself is perfectly fine.
One reason I can think of is kinda the same thing as with Vulkan. They both are so agnostic to what they run on that they have a minimal set of things that they function with.
In the case of Vulkan it's: I interact with the graphics processor, and don't care if you have a display, so windowing is an extension and has platform implementations if applicable.
And in the case of Wayland it's: I am a protocol for managing and displaying windows, and I don't care what kind of inputs you have (for example, VR doesn't have classical PC inputs).
Userspace fix: https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests...
Kernel fix: https://github.com/torvalds/linux/commit/26b1d3b527e7