I think that's only true from the perspective of an OS developer:
ABI inter-compatibility (e.g. the Windows and Linux model) prioritizes customer experience. Customers hate it when their applications stop working, and application developers don't want to spend lots of effort to track platform API changes just to avoid breakage.
Abandoning ABI inter-compatibility (OpenBSD, Apple) prioritizes platform developer experience. They want to be able to freely make API changes and don't want to spend time maintaining old APIs when they could be working on new ones.
I think the problem with the latter is, while there may be a few hundred developers working on a particular OS, there are orders of magnitude more customers and application developers. Totally abandoning ABI inter-compatibility seems like putting the interests of the very few over the interests of the very many.
I'm not really an OS developer* but I feel like all the build-up of clutter, deprecated-but-still-there-but-unmaintained dead-end APIs with kinda-working (with caveats) replacements, and redundant tools (targeting both old and new APIs) is actually significantly hurting my experience as a user, and as a developer who builds applications on top of the platform with all that clutter. That's right, I hate Linux as a development platform as well as in terms of UX. By contrast, the Linux kernel, where they do not maintain internal API compatibility, is a much nicer area to be writing code in.
I love OpenBSD specifically as a development platform as well as due to its UX. (Not all of its qualities can be attributed to lack of legacy clutter and ABI stability, but I do believe it plays a role)
* For day job, I develop and maintain low level systems software on a custom embedded distro, along with some kernel bits.
Transplant this approach to Windows, where users habitually expect software that was built twenty years ago against Windows XP to still run, and you get a lot of angry people calling.
IMHO (spoiler alert: OS developer, but not on OpenBSD), whether or not you want to spend time maintaining old APIs shouldn't be relevant, not after you reach adulthood anyway. I don't think OpenBSD chose this approach because the alternative just ain't fun. If a stable API is part of your approach, then that's what you do. It's not glamorous but a lot of things in programming aren't. You can certainly add new functionality while maintaining compatibility. Linux does it pretty well, for example. (Linux does break things now and then, but very rarely.)
It's also a valid choice to not maintain ABI compatibility or API compatibility, but making that choice doesn't magically absolve you of making this work for users and third-party developers.
There are projects that eschew this responsibility and offer various silly reasons for it. Unfortunately, unless you own and operate the kind of reality distortion field device that Steve Jobs owned and operated, people usually see right through the silly reasons and tend to lose patience with this model (unless they're financially committed, which is why it doesn't work that well for FOSS software).
- More bugs
- More vulnerabilities
- Increasing cost of support
Which absolutely impact the customer experience, just not in the short-term. A little bit of short-term friction averts long-term intractability.
To put this in another context: newer building codes may result in better and safer homes, but it'd be extremely user hostile to force homeowners to proactively upgrade their homes to compliance each time a new version is released (at the threat of having their home condemned if they do not). The sensible compromise, in buildings and software, is to allow things to be upgraded over time, as they're modified.
Breaking backwards compatibility has a larger negative impact in the _present_ than the cruft of old APIs and code. But that impact is temporary. However, the negative impacts of cruft can have a larger impact _in total_ over the entire lifetime of the operating system.
They affect it far worse, because they affect _every_ user. Having unmaintained/outdated software break only affects the subset of users that want to use that particular software.
One huge market where this does happen is games. Disregarding the current plague of microtransaction-funded 'live experiences', most games are pieces of software that get released and are mostly done, barring some added content going out for a year or two. Losing the ability to play these games because someone has decided that ABI compatibility is kinda hard is ridiculous, and would definitely not fly for a consumer OS.
It would be interesting for someone to try to apply this same argument to hardware: would it make sense to abandon old hardware support every release? Doing this with device drivers was one of the things which hurt Linux adoption on the desktop, and hurt Windows Vista's release immensely.
Overall, end-users do not and should not care about OS updates. They are a necessary evil: they fix bugs that the OS developers missed that threaten their security, and they let users run new applications that rely on new OS features. But breaking old applications or hardware is a massive pain point that makes users wary of updating despite the risk to their security.
Old games have a tendency to break for various reasons even without ABI breaks.
I think it's ridiculous that games are still primarily closed source binary blobs that cannot be easily fixed and patched by the users to keep them running fine for decades.
It's not a bit of short-term friction, it's constant friction. As long as development is continuing, there's always something about the API that can be improved and would really pay off if only that was actually its final state and not just the next step until we find a good reason to break it again.
I think the implicit calculation here is that if you push a release that breaks the user's workflow, they can point to a specific point in time where things became frustrating and there will be a PR hit at that moment in time.
If instead you maintain compatibility, the small costs of all the technical debt accrue over time to make the experience worse than it might otherwise be, but users may not even notice or have a conception of what they may be missing for having stayed on this path.
They ultimately may end up with a worse product / UX, but they have no specific reason to complain about it.
Note that this doesn't reflect my personal values about software, but I can see how it serves Apple's priorities and keeps their products attractive in the eyes of their customers.
In the OpenBSD and Apple worlds, you can change an ABI because applications are expected to, as they should, call libc and similar platform abstraction libraries instead, and these libraries are dynamically linked.
But that only shifts the maintenance burden. Any time Windows changes its inner workings, every existing library has to be changed to expose the same interface to old code, including being bug-compatible and supporting all the abuses of undocumented features (or at least those used in software used by relevant clients).
Here's an example from my experience with ordering food or drinks. I'm floating the idea that it generalises to other areas as well, but that's up for debate. If I don't know the restaurant or bar well and I have a chance to ask the kitchen staff or bartender (in case of a drink) for suggestions, I'll do it. I'll ask what they like to make. I'll always ask indirectly, because it's a personal question. I'll engage them in a short dialogue in which I get a feel for _what would feel good for them to make_, _how_, etc., and then I'll ask them to do it for me. It's often something that involves a bit more skill or know-how or is stimulating to do in some other way. Sometimes it's something gourmand that they're proud to be able to offer, and they're thrilled that someone's willing to walk off the beaten path and that they can accommodate that.
That's the diametric opposite of looking at the menu and picking what you fancy the most. In a way it's about not taking your likes and dislikes seriously, so as to stay open, because this is a place you don't know, while this person you're talking to essentially is the place.
So why do I prioritize the experience of the service provider over mine? I don't. I recognize that my experience is very closely tied to that of the cook or the bartender, and if she's happy, all other things being equal, my chances to be satisfied are the highest. A happy cook makes better food, a happy bartender makes a better drink, and the experience of us aligning our needs leaves us both feeling uplifted, because this wasn't just another impersonal money-for-goods transaction.
But now, 1 year passes and you go for a regular check-up to your mechanic and they start visibly sighing when you enter. They now hate repairing this ancient piece of tech, it barely matches any of their tools, and they caution you that the tool manufacturers are actually moving to a six-month schedule for new car repair tools getting released, and half the tools they use for your car are already deprecated and likely to be discontinued next release.
Would you be happy to just buy a new car, to prioritize the service provider's experience? Or would you seek a different service provider/car brand, that doesn't do this?
I guess what I'm talking about is looking for win-win situations. I get the feeling that when it seems like you can't get there, a more fundamental problem has been introduced earlier in the process. Your example looks a bit complicated, because it's more obvious that the relationship is actually made up of more than 2 parties. Those explicit in your example: manufacturer, mechanic, driver. You could keep adding parties: regulator, importer, suppliers to factory, trade unions, etc.
As a developer I love everything that is new (I mean NEW, not repackaged old technology!); as someone who also has to support software at the system level, I absolutely hate it.
But when they do so, I think they do it in spite of the lax attitude towards compatibility, due to other factors. Chief among them is the higher prestige Apple products have, the fact that many basic users only use very few programs that are all under heavy development (e.g. browsers) that lessens the sting, and many regular users have become resigned to being abused by their technology providers.
If you narrow the question down to one factor, "we're going to break your stuff" vs. "we're going to do everything we can to keep your stuff working", I think most people would choose the latter.
> As nice as backward compatibility is from a user convenience perspective when this feature comes as a result of a static kernel Application Binary Interface this trade-off is essentially indistinguishable from increasing time-preference (or in other words declining concern for the future in comparison to the present).
I would argue that users are not avoiding the system due to the lack of backward compatibility; rather, my contention would be that this feature comes at a cost that outweighs the benefit (also from the article):
> This can be seen with a continuous layering of hacky fixes, sloppily 'bolted on' feature additions (built in such a way that new additions don't conflict with existing APIs), and unremovable remnants of abandoned code segments left in place purely to ensure that applications continue to run. These issues not only cause their own problems but, in aggregate, cause a huge creep in the number of lines of code in privileged memory. This does not grow the exploitability of a system linearly but rather it causes exploitability to grow exponentially due to the fact that by there being more code to exploit, malicious functionalities can be chained together and made more harmful.
What I am upset about is the removal of a standard that worked well. Just don't allow OpenGL apps in the Store and that would be enough.
Even worse, removal of 32-bit support in macOS. Now that's an extraordinarily bad move that confirms Apple does not care about enterprise or gaming.
The fact that it is now officially deprecated is just a warning from Apple that it will be removed soon, without the bad PR.
> What I am upset about is the removal of a standard that worked well
That is not correct on either count:
1) OpenGL is a terrible fit for modern GPUs so I don't agree that it "worked well"
2) OpenGL was not removed anyway
> OpenGL support has been abandoned
That is not correct. Code continues to be written and maintained to keep OpenGL working on newer GPUs, both OpenGL ES on iOS GPUs, as well as OpenGL on mac GPUs.
> The fact that it is now officially deprecated is just a warning from Apple to remove it soon without bad PR.
macOS/iOS continue to support /many/ deprecated APIs, some of which have been deprecated for over a decade. Contrary to popular opinion, things are not removed just for fun. Things are removed when there is a sound security/technical reason, or when there is a high ongoing cost either to end users or the development process.
The alternative is to be MS and never remove anything, where any change to the observable behavior of the system (or even moving internal struct values around!) can cause breakage, and so is either not done or requires inserting hacks to preserve behavior. If you think that doesn't impact individual engineers' decision-making process ... well, I don't know what to tell you. It must be soul-crushing to know that if you change an internal data structure from uint16_t to uint32_t, some crappy app that depends on being able to poke around will break. Surely such policies encourage some developers to do even more such hacky things, knowing MS will take the blame and end up making sure you can keep getting away with it.
Yes, but from the outside this is not necessarily obvious. Just ask the VLC developers: they’re running into breakage constantly on macOS. There were a couple of builds back in the summer where OpenGL just wouldn’t initialize at all, meaning even system components like WebGL, iTunes visualizers, and certain screensavers wouldn’t work correctly (though I did notice that there’s a new one written in Metal that wasn’t affected…)
They do remove stuff, all the time, the transitions just happen to be a bit more gentle, with more years to prepare for it, although not always, e.g. 16 bit, MS-DOS support, WinG, WinRT (Win 8.x variant), WDDM, WCF, Remoting.
No, I am not moving goalposts. OpenGL has been useless for all intents and purposes in macOS for years already. The deprecation is the least of concerns.
> 1) OpenGL is a terrible fit for modern GPUs so I don't agree that it "worked well"
A terrible fit? What are you even talking about? It is as featureful as the latest D3D11 which is the API most games are based on. What is useless is the OpenGL version and drivers that Apple ships.
Please don't spread FUD. There is no security or technical reason, nor significant cost, to maintaining a proper OpenGL.
32-bit macOS used a fragile ABI for Objective-C, meaning all ivars had to be in public headers and changing any of them changed the runtime layout of your class and _all_ subclasses. AKA adding a field was a breaking change. This imposed a huge maintenance burden up and down the stack. Often classes would include a void* pointer to a side-table where new fields could be added, or the public class was a wrapper around an internal implementation. Both of those have a performance cost (extra pointer chasing + extra mallocs, or double objc_msgSends for every method/property).
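The side-table workaround described above can be sketched in plain C (all names hypothetical): the public struct never changes size, so clients and subclasses keep a stable layout, at the price of an extra allocation and an extra pointer chase per object.

```c
/* Sketch of the "pointer to a side-table" pattern used to work around
 * a fragile ABI. Fields can be added to widget_impl freely; the public
 * struct stays one pointer wide forever, so the ABI never breaks. */
#include <stdlib.h>

/* Hidden implementation: grow this at will between releases. */
struct widget_impl {
    int width;
    int height;
    int depth;   /* added in a later release; clients never notice */
};

/* Public, ABI-stable object: one opaque pointer, forever. */
struct widget {
    struct widget_impl *impl;
};

struct widget *widget_new(int w, int h) {
    struct widget *wd = malloc(sizeof *wd);   /* extra malloc...       */
    wd->impl = malloc(sizeof *wd->impl);      /* ...per object created */
    wd->impl->width = w;
    wd->impl->height = h;
    wd->impl->depth = 0;
    return wd;
}

int widget_width(const struct widget *wd) {
    return wd->impl->width;                   /* extra pointer chase   */
}

void widget_free(struct widget *wd) {
    free(wd->impl);
    free(wd);
}
```

The performance cost the comment mentions is exactly the two mallocs and the indirection visible here, paid on every object and every accessor.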
There are many other performance optimizations (e.g. non-pointer isa) that were impossible on 32-bit, meaning every app that refuses to move to 64-bit imposes even more performance costs. Shipping 32-bit versions of frameworks also bloats the size of the OS and forces the OS to load a duplicate dyld shared cache. It even makes system updates and installs take twice as long (two shared caches need rebuilding, after all!)
None of this accounts for future optimizations and improvements that would have been nearly impossible if the system still had to support 32-bit without forking half of the userspace frameworks... that kind of forking would be an absolute maintenance nightmare.
The deprecation of 32-bit has been obvious for over a decade. It is based on sound technical reasons. You may not agree with those reasons but they have nothing to do with not caring.
You might instead ask why a developer would choose to ship a 32-bit app/game anytime after 2010 when 64-bit was available and obviously the future? Should all Mac users continue to pay the cost in disk space, memory, and performance? How long should they pay that cost? Were you hoping macOS would support 32-bit applications in 2030? 2040? What about the undeveloped features and unfixed bugs due to the overhead of continuing to support 32-bit?
The move to 64-bit, on the other hand, is not all upside. There is overhead for pointer-heavy data structures, which kept some applications from preferring it for a long time, mainly in the gaming arena. I also remember that browsers were very reluctant to move to 64-bit, until the security benefits from ASLR became overwhelming (and web apps started being so memory hungry that 2-4GB started looking small).
Also, the clear technical deficiency that you cite with Objective-C on 32-bit (the ABI break from adding private fields) is an oft-defended feature in the most used native programming language in the world, C++. There, it is done for performance reasons: to avoid an indirection for each structure allocated on the stack.
Overall, my point is that the technical case for 64 bit is not nearly as clear-cut as you make it out to be for many applications. I do agree that the ecosystem case for moving to 64 bit is pretty strong, but I don't think it is enough to say that developers who don't do it are idiots and dragging everyone back.
And just to be clear, I'm not being defensive or bitter, the applications I work on moved to 64 bit as soon as we could, probably 7 years ago, with very good technical reasons to do so - they were very memory hungry and benefitted greatly from having access to more than 2 GB RAM (Windows).
The rest of your arguments about storage size, update time, "optimization" etc. are not just irrelevant, but also solved more than a decade ago without penalties in other operating systems.
The funniest thing is the last one about 2010 devs. Go ahead, go back in time to 2010 and tell everybody to port their hundreds of millions of LOC and third-party dependencies (many without source), and to duplicate their testing cost, just so that Apple cannot be blamed for dropping 32-bit support ten years later.
Let me tell you the reality: Apple is a hardware company, not software. Apple cares about selling iPhones, not enterprise long-term support or non-casual gaming. On the other hand, Linux and Microsoft and other companies care about users and customers and they are paid for that because they are a software shop. That's the difference.
Why not make the 32-bit runtime an optional download?
But it can be recreated with Unix domain sockets and what have you, in exchange for a kernel with more active work going on, not just by a single entity; more wood and arrows behind it, so to speak. And Mach is dated, from the microkernel experimentation era of the '80s and '90s, with a lot of overhead of its own.
The fact remains that Linux outperforms macOS on the same hardware. Yes I have run comparisons.
Or, heck, they could port the IPC and anything else worth keeping. Even just building a private fork of FreeBSD with the Darwin personality grafted on top would be viable.
This is already how game consoles and low latency systems work for the most part.
At that point microservices might become more palatable since the context switching won’t damage the rest of the processes performance as much. And Linux’s performance advantage might dissipate as scheduling and cache pressure become less relevant.
Well, unless they want to play games or run any non-Mac software … I don't know where you get your idea that customers prefer macOS, but apart from some niche communities (designers, some subgroups of developers) this simply doesn't hold.
Plus, regarding the theme being discussed here, the use of Metal is transparent when using SceneKit, SpriteKit, Core Graphics, and so on.
I have to say that I don't know why you dismiss open source graphics stacks. Anyone who has worked with the open source Mesa knows it is a breath of fresh air compared to the proprietary Qualcomm drivers, Mali drivers, Apple OpenGL drivers, or (horrors) fglrx.
And I've spoken with the Unity engine developers, who say they still consider OpenGL ES 2 support essential. Sure, Unity uses Metal (and I use Metal) because Apple forces us to in order to get maximum performance, but does anyone really want to write both Vulkan and Metal? Valve wouldn't have acquired MoltenVK if they were really itching to write Metal!
Valve is in the driving seat with Khronos regarding Vulkan.
Let's not forget that OpenGL is only alive on the Mac thanks to the NeXT acquisition and Apple's trying to cater to UNIX devs during its survival years; OpenGL wasn't even on the radar for Copland, rather QuickDraw 3D.
Which actually was probably the only OS that provided a usable OpenGL SDK, something that neither the ARB nor Khronos ever delivered.
Computer system design reflects the business that a company is in. It isn't the case that after years of development Microsoft has ended up with a bad operating system because people at Microsoft are idiots, rather it's the case that they're in the enterprise software business.
It isn't the case that Linux has not adopted the architectural advancements [...] because they aren't smart enough to implement those changes. The reason they have not adopted these changes is because they are in the business of ensuring that the people who pay them aren't made unhappy by a massive change to the kernel's architecture that necessitates a non-trivial expenditure of time and capital to modernize all its software products just to keep them running.
Computers are becoming less secure and in many cases only a few systems are continually innovating in both the APIs they offer developers and the architecture of the underlying system itself.
In the case of Linux everyone is now shipping entire userlands with their applications via docker just to workaround compatibility issues. We'd be shipping entire VMs if the kernel wasn't the only one holding the line on compatibility.
It's been a long time now since I saw a programming post talking about how some new paradigm or way of doing things would make life great for the users.
If you install from a package, just update. If you built from source, just recompile. If you got a binary, use the support contract you paid for. And if you paid for a binary without a support contract, you got screwed hard, since you can't get bug fixes even if the OS was immutable. But if you did screw yourself, there's vmd that lets you freeze your OS in time.
An immutable OS prevents new bugs from cropping up.
This is a terrible experience for customers (since their apps break every year) and for developers (since they have an ongoing maintenance burden dumped on them by Apple just to keep their apps working across yearly iOS updates.)
The main beneficiary of abandoning ABI compatibility (as Apple has done) is the platform developer (e.g. Apple) who avoids the maintenance burden of backward compatibility.
It's arguably the wrong approach because it helps the platform developer (Apple) at the expense of existing customers and developers. There is multiplicative burden of pain - each time Apple breaks something, millions of customers and thousands of developers pay an immediate price.
There is a long-term user benefit to platform evolution, but the short-term cost is relentless and ongoing.
For game developers in particular, the stability and backward compatibility of Microsoft/Sony/Nintendo platforms is a dream compared to the quicksand of iOS development.
Then life happened and I left WoW completely.
Fast forward ~10 years to 2019, they released World of Warcraft Classic, which I assume uses the same API as the original game in 2004. Someone emailed me asking if I could release a new version of the plugin that works with WoW classic.
I was like, "No."
Not because of ABI compatibility. The system frameworks even put in effort to work as they did before if you don’t update your application.
Just give me xterm and bash, even with a touchscreen that would be better than dealing with everyone trying to reinvent the wheel every year.
You have to be a really terrible app developer for that to be true.
Selecting OS X used to imply much more of an attempt to handle this; maybe n-3 is outside the goal, but n-1 and n+1 usually kinda work. Except when things like "we don't want 32-bit any more" hit, after 2 or more years of heads-up. Turns out vendors don't want to incur that cost. Stuff which people want and "depend on" as kexts doesn't work.
Consider how Python 2 dependencies are going in a world of Python 3, and that's userspace, not ABI. It's not the OS, but... it's similar.
Indeed, which is why its market share is tiny.
Overwhelmingly I think desktop support and the Ubuntu/LTS effect did it: FreeBSD demanded more of you to get it to work. The working outcome I still like, but commodity UNIX is just simpler from OS X, or from Ubuntu. And vendors back it enough to mean you can get more things to work, more quickly, closer to the cutting edge. I am pretty sure I will get a working Linux desktop on any laptop I plausibly buy next time. I believe 80% of things will work fine in FreeBSD, but the last 20% (Synaptics driver, fingerprint driver, TPM driver, blob-ridden WiFi driver...) are going to be hard.
That was essentially what MS did with "Windows on Windows" that brought 16-bit applications over to Win32. And Apple with Rosetta, the blue box, etc. These were hugely expensive because they had to track down all the unwritten interfaces applications use.
If Linux standardizes virtualization for enterprise support, applications should run in it all the time, so it's impossible for them to access any private interfaces.
And it's sustainable because when enterprises find they're stuck with these closed source applications, they'll have a direct interest in supporting maintenance of the older virtualization.
“Any other asset” is not informative. When my company buys me a laptop, the assumption is that it will continue to function for three years. When they buy me a chair, seven. When they buy a building, thirty.
That’s an order of magnitude difference in depreciation schedules. The two problems here I see are:
1) Nobody in the accounting department had any clue how to do this in the 1980s and 1990s. So their cost projections were badly inaccurate, and they didn’t have realistic depreciation schedules.
2) The contracting firms are not incentivized to do maintenance and don’t even know how to do it in the first place.
This absolutely is distinguishable. Backwards compatibility is a complex tradeoff, no matter who you are (OS developer, app developer, end user, etc). It’s as complex as opex vs capex (and probably more similar to that tradeoff).
The one case where breaking ABI would make things so much easier is y2038 but it only applies to 32-bit systems, again nothing that matters to the Oracles and SAPs.
So does this mean the Linux kernel is not as bulky and full of bugs as the article has claimed?
p.s. I am just an average Linux user who wants to know more about this
Yes, the core is bigger than OpenBSD's. It's also more scalable and generally has higher performance. It's got nothing to do with backwards compatibility.
* The essay appeared to continually interchange the terms API and ABI, which to my mind are very different things.
* The examples provided weren't as concrete and concise as I would have liked.
* The author failed to convince me that layers of emulation and abstraction (ie what Windows currently does) are somehow fundamentally flawed. Actually there's an argument in favor of this at the end; is the author under the mistaken impression that Windows doesn't already do this? Perhaps I've misunderstood the intended point?
> Computers are becoming less secure
That is not my impression _at all_.
> It isn't the case that after years of development Microsoft has ended up with a bad operating system because people at Microsoft are idiots, rather it's the case that they're in the enterprise software business.
I found it hard to take the essay seriously due to statements such as the above. The author would do well to call out explicit problems with Windows rather than generally smearing it.
The author's point seems to boil down to an argument to shift resource expenditure off of OS developers and on to user space developers. That's difficult to take seriously, because in the real world resources are limited. The OS is the underlying infrastructure that everything is built on top of - if it changes too quickly, it's no longer particularly useful as an OS. Preventing breakage is (to my mind) one of the core aspects of an OS developer's job. Linus "WE DO NOT BREAK USERSPACE!" Torvalds is one of the primary reasons I'm comfortable using a Linux OS as a daily driver instead of Windows or macOS (the Debian maintainers are the other reason).
(I should note that I choose to use Linux due to development tooling and open source ideals, but at their core Windows and macOS both seem like perfectly reasonable OSes to me.)
The author seems focused on that aspect as the reason Linus is against ABI changes. But in fact this was his stance for years as he's user-centric: people expect things to continue to work when they upgrade the kernel, so if you have to break their experience, you really need a very good reason. It's not like he's started thinking this way when he became an employee of the LF.
I use a Linux desktop. If I want old versions of open source stuff, Wine running the Windows binary is where it's at.
We now have a complete, futureproof free software stack with decades of backwards compatibility! It just has win32 in the middle.
This article makes the case for the advancement of OS development, but not for what people use computers for.
The author claims Linus Torvalds enforces Linux binary interface stability because the Linux foundation members that pay his salary want it. Is this really true? If that was the case, I'd expect the internal kernel interfaces to be stable as well. They are unstable and he actively fights to keep them unstable even though the companies would very much enjoy having stable driver interfaces.
> Stuff outside the kernel is almost always either (a) experimental stuff that just isn't ready to be merged or (b) tries to avoid the GPL.
> Neither is worth a _second_ of anybodys time trying to support, and when you say "people spend lots of money supporting you", you're lying through your teeth. The GPL-avoiding kind of people don't spend a dime supporting me, they spend their money actively trying to debase and destroy what I and thousands of others have been working our butts off for.
> So don't try to make it sound like something it isn't. We support outside projects a hell of a lot better than we'd need to, and I can tell you that it's mostly _me_ who does that. Most of the core kernel developers argue that I should support less of it - and yes, they are backed up by lawyers at their (sometimes quite big) companies.
Of course, I will not attest to any architectural advantage of NT, especially today. Everything from the filesystem to the schedulers to the memory management... it all leaves a lot to be desired. Maybe with Genode coming along, we'll get a serviceable seL4 desktop that I can run my Chicago-style UI on. :- )
My impression of the eventual ideal is that the formally-verified stuff can be allowed into the kernel, if there is some valid reason to do so; and everything else can sit elsewhere.
So yes, it is ongoing. But not Kernel -> Userspace. Instead it is Hypervisor mode Kernel(?) -> Kernel.
I even installed it with floppies on a laptop.
I wasn't concerned with anything the article was about at the time. It just felt like the PC OS world took a jump into the future... even if the reason I mostly felt that way was the UI and etc.
Specifically, the Linux kernel maintainers, not the Linux Foundation, determine the policy that the user space ABI remains stable while the device driver API is unstable.
Disclosure: I work for the Linux Foundation, and I know that if we told the kernel maintainers to change their policy they would laugh at us.
Worse is better, and Microsoft got this one right.
But we can do this much more efficiently. IIRC, prior variants of this were called "personalities". I think the term's been reused now.
I think we could have the program loader consume the loaded program and act as an API proxy between it and the actual kernel.
The model is App -- Static ABI --> [Simulated Kernel ABI by actual kernel] -> Actual kernel.
Everything outside the specified kernel ABI version simply does not exist for the application.
So it can run applications that are many years old, as long as the kernel is willing to simulate that ABI for them.
This is also roughly how 64-bit Windows runs Win32 apps and WSL: a proxy layer intercepts their calls and simulates the API they expect.
BTW, there are several misspellings of "its" in your article. Search for "it's" because most of them should be changed to "its"
EDIT: forced into paying for upgrade
It's also interesting to consider the web as an application platform in this context. It too has an append-only API that places high importance on indefinite backwards-compatibility. However, because that API is dynamic, not binary, the underlying implementation has much more room to maneuver and re-structure without breaking it.
Other libraries, sure, but when it comes to glibc, this is false. glibc uses symbol versioning. E.g. a program that uses fork uses a versioned symbol:
$ nm a.out | grep fork
I tried googling to find what this limit is and where it's mentioned. Could anyone help me out with a link? What is the limit?
Is that accurate?
This is one of the main reasons that corporate OS producers like Microsoft support backward compatibility that seems excessive.
You're probably right that within a given hardware platform, older software will generally be very fast on newer hardware, but if one tries to migrate platforms (e.g. mainframe emulation on Linux or Windows), that's not a safe assumption. Around 2009, I saw a team try to migrate some of those early-80s mainframe apps to an emulator, and even on high-end HP servers running Windows, performance was too poor to use a lot of it in production, because it had all been written with ridiculously high-throughput mainframe storage I/O in mind, and emulation couldn't keep up.
You may think (as I do) that those corporations should just bite the bullet and replace those old apps with something modern, but they're the ones writing the checks, so Microsoft and company give them what they want.