Hacker News
Learning from OpenBSD can make computers marginally less horrible (telegra.ph)
220 points by ArcVRArthur 42 days ago | 127 comments



> OS Application Binary Interface (ABI) release inter-compatibility is the cancer killing the modern operating system.

I think that's only true from the perspective of an OS developer:

ABI inter-compatibility (e.g. the Windows and Linux model) prioritizes customer experience. Customers hate it when their applications stop working, and application developers don't want to spend lots of effort to track platform API changes just to avoid breakage.

Abandoning ABI inter-compatibility (the OpenBSD and Apple model) prioritizes platform developer experience. They want to be able to make API changes freely, and don't want to spend time maintaining old APIs when they could be using that time to work on new ones.

I think the problem with the latter is that, while there may be a few hundred developers working on a particular OS, there are orders of magnitude more customers and application developers. Totally abandoning ABI inter-compatibility seems like putting the interests of the very few over the interests of the very many.


> I think that's only true from the perspective of an OS developer

I'm not really an OS developer* but I feel like all the build-up of clutter and deprecated-but-still-there-but-unmaintained dead-end APIs with kinda-working (with caveats) replacements and redundant tools (targeting old and new APIs) is actually significantly hurting my experience as a user, as well as as a developer who builds applications on top of the platform with all that clutter. That's right, I hate Linux as a development platform as well as in terms of UX. By contrast, the Linux kernel, where they do not maintain internal API compatibility, is a much nicer area to be writing code in.

I love OpenBSD specifically as a development platform as well as for its UX. (Not all of its qualities can be attributed to the lack of legacy clutter and ABI stability, but I do believe it plays a role.)

* For my day job, I develop and maintain low-level systems software on a custom embedded distro, along with some kernel bits.


Both as a user and a developer, I am very happy that Windows 10 still ships with Active Scripting, ActiveX and COM, as well as all the Win32 stuff, because anything that came after that was a pile of crap for power users. Of course, these are user-space technologies.


Naturally it still ships with COM; that is what Windows' modern ABI is all about, and has been since Longhorn.


It's important to remember that this works for OpenBSD in a context where a lot of the tools it's used with are in base and developed in sync with the ABI, by the same developers, or in the ports tree, which is maintained by an extraordinarily capable team of generous volunteers. It's not zero-effort, by far, but the effort is also not offloaded to its users. The people who break it are also smart and responsible enough to fix it, so the breakage users see is basically zero. (I'm using OpenBSD on some machines, and I've been bit by this once or twice, I think, since OpenBSD 3.1, which was released in 2002.)

Transplant this approach to Windows, where users habitually expect software that was built twenty years ago against Windows XP to still run, and you get a lot of angry people calling.

IMHO (spoiler alert: OS developer, but not on OpenBSD), whether or not you want to spend time maintaining old APIs shouldn't be relevant, not after you reach adulthood anyway. I don't think OpenBSD chose this approach because the alternative just ain't fun. If a stable API is part of your approach, then that's what you do. It's not glamorous but a lot of things in programming aren't. You can certainly add new functionality while maintaining compatibility. Linux does it pretty well, for example. (Linux does break things now and then, but very rarely.)

It's also a valid choice to not maintain ABI compatibility or API compatibility, but making that choice doesn't magically absolve you of making this work for users and third-party developers.

There are projects that eschew this responsibility and offer various silly reasons for it. Unfortunately, unless you own and operate the kind of reality distortion field device that Steve Jobs owned and operated, people usually see right through the silly reasons and tend to lose patience with this model (unless they're financially committed, which is why it doesn't work that well for FOSS software).


Except it means unbounded growth in complexity, which inevitably leads to, among other things,

- More bugs

- More vulnerabilities

- Increasing cost of support

Which absolutely impact the customer experience, just not in the short-term. A little bit of short-term friction averts long-term intractability.


That's true: those things do negatively affect customer experience. However, they don't affect it as negatively as having their software break does.

To put this in another context: newer building codes may result in better and safer homes, but it'd be extremely user-hostile to force homeowners to proactively upgrade their homes to compliance each time a new version is released (under the threat of having their home condemned if they do not). The sensible compromise, in buildings and in software, is to allow things to be upgraded over time, as they're modified.


From the OP:

> declining concern for the future in comparison to the present

Breaking backwards compatibility has a larger negative impact in the _present_ than the cruft of old APIs and code. But that impact is temporary. However, the negative impacts of cruft can have a larger impact _in total_ over the entire lifetime of the operating system.


> However they don't affect customer experience as negatively as the experience of having their software break.

They affect it far worse, because they affect _every_ user. Having unmaintained/outdated software break only affects the subset of users that want to use that particular software.


You know the saying about how no Excel user uses more than 10% of its features? But everyone uses a different 10%, so ~100% matters? I defy you to find me a business, or probably even a human, more than 2 years old (using computers for more than 2 years) and not using any "legacy" applications. We maintain compatibility for everyone, because everyone uses it.


Who is using these legacy applications, and for what? 99% of people use a web browser only these days. As for myself, the closest thing I can think of is some in-house legacy crap, but even that was 10 years ago. The majority of businesses and humans don't have any of this. What kind of circles do you run in where people are routinely using legacy software?


You seem to think that software must be a continuously updated thing, or it becomes legacy. This is somewhat true in the current world, but it is massively wasteful and unnecessary. It should be normal for software to be finished, and one should expect finished software to keep working for many years.

One huge market where this does happen is games. Disregarding the current plague of microtransaction-funded 'live experiences', most games are pieces of software that get released and are mostly done, barring some added content going out for a year or two. Losing the ability to play these games because someone has decided that ABI compatibility is kinda hard is ridiculous, and would definitely not fly for a consumer OS.

It would be interesting for someone to try to apply this same argument to hardware: would it make sense to abandon old hardware support every release? Doing this with device drivers was one of the things which hurt Linux adoption on the desktop, and hurt Windows Vista's release immensely.

Overall, end-users do not and should not care about OS updates. They are a necessary evil, to fix bugs that the OS developers missed that threaten their security, and to be able to use new applications that rely on new OS features. But breaking old applications or hardware is a massive pain point that makes users wary of updating despite the risk to their security.


> Losing the ability to play these games because someone has decided that ABI compatibility is kinda hard is ridiculous, and would definitely not fly for a consumer OS.

Old games have a tendency to break for all sorts of reasons even without ABI breaks.

I think it's ridiculous that games are still primarily closed source binary blobs that cannot be easily fixed and patched by the users to keep them running fine for decades.


If you've walked into a bank, hospital, or anywhere using a PoS system, you've interacted with legacy software--in some cases, literally jury-rigged DOS software. Outside of trendy dev circles, legacy software is the name of the game.


OK, banks, hospitals. You forgot SCADA systems. How on earth do you leap from these to not only all business, but all humans?


Nearly all retail and food service businesses in the west operate POS systems. Very few of those run anything more than Windows XP, and a huge number run DOS or OS/2.


I'd think emulators would make short work of those, especially the DOS ones.


Just one anecdote - 10,000s of insurance staff work on software that’s been in maintenance mode since Windows 3.1. There are massive insurance companies that have been attempting to replace such software for more than 20 years.


Games?


> A little bit of short-term friction averts long-term intractability.

It's not a bit of short-term friction, it's constant friction. As long as development is continuing, there's always something about the API that can be improved and would really pay off if only that was actually its final state and not just the next step until we find a good reason to break it again.


Except it may or may not be only a little bit of friction, depending on the breakage, and it is not short-term if it happens regularly.


Right - maintaining backwards compatibility is deciding to take on additional technical debt.

I think the implicit calculation here is that if you push a release that breaks the user's workflow, they can point to a specific point in time where things became frustrating and there will be a PR hit at that moment in time.

If instead you maintain compatibility, the small costs of all the technical debt accrue over time to make the experience worse than it might otherwise be, but users may not even notice or have a conception of what they may be missing for having stayed on this path.

They ultimately may end up with a worse product / UX, but they have no specific reason to complain about it.


You forgot lower performance.


"among other things" :P


I think by "customer" you mean "a certain kind of customer." Apple prioritizes their customers, who largely buy one device at a time and use recent, actively developed software, as opposed to customers who buy software licenses in bulk and sometimes need to run decade-old unsupported software. In the course of this, they've attracted a lot of enterprise customers as well, but those customers aren't buying desktop software, because they're aware of Apple's development model.


Do Apple customers use recent actively developed software because they want to or because Apple often breaks their APIs and actively developed software is all that you can use?


I think it's both. Ten-year-old enterprise desktop applications may be functional and even highly usable, but they look and feel ancient. It's been a long time, but I remember using Windows software in the 2000s where I could almost smell the mold and dust. Applications like that are the software equivalent of Miss Havisham's wedding dress. If you're positioning yourself as a fashion-conscious brand selling products that become part of a person's identity, it helps to have a mechanism for sloughing off applications that aren't constantly being refreshed and rewritten. Constant rewrites aren't economical for enterprise applications with a few hundred or few thousand users, but those have moved to the web anyway.

Note that this doesn't reflect my personal values about software, but I can see how it serves Apple's priorities and keeps their products attractive in the eyes of their customers.


A contributing factor could also be that because for many in the Apple world, hardware goes in tandem with the software, it follows the natural cycle of hardware refresh. Sure every now and then someone finds that an app no longer works, but if they aren't paying big big bucks, who cares?


If you are writing applications that directly call OS ABIs you are doing it wrong.

In the OpenBSD and Apple worlds, you can change an ABI because applications are expected to, as they should, call libc and similar platform abstraction libraries instead, and these libraries are dynamically linked.


It’s not just those two - probably every mature system on the planet except Linux defines the system interface at the library level, not the syscall level.


By that definition Windows is freely changing ABI too, since the only defined interfaces are libraries like the KernelBase.dll or netapi32.dll.

But that only shifts the maintenance burden. Anytime Windows changes its inner workings, every existing library has to be changed to expose the same interface to old code, including being bug-compatible and supporting all the abuses of undocumented features (or at least those used in software used by relevant clients).


I see this discussion about the conflict between user needs and developer needs coming up time and time again. I think this disconnect between user and developer is an interesting phenomenon in itself. What's more, often this disconnect seems to be defended as a natural feature of the service sector: it's an exchange of money for service, so the service provider's experience doesn't matter, as long as he's willing to work for that much. Reducing the relationship to money is often how these discussions go, in my experience, and I think that it's something that can look much better on paper (unintentional pun) than in practice.

Here's an example from my experience with ordering food or drinks. I'm floating the idea that it generalises to other areas as well, but that's up for debate. If I don't know the restaurant or bar well and I have a chance to ask the kitchen staff or bartender (in the case of a drink) for suggestions, I'll do it. I'll ask what they like to make. I'll always ask indirectly, because it's a personal question. I'll engage them in a short dialogue in which I get a feel for _what would feel good for them to make_, _how_, etc., and then I'll ask them to do it for me. It's often something that involves a bit more skill or know-how or is stimulating to do in some other way. Sometimes it's something gourmand that they're proud to be able to offer, and they're thrilled that someone's willing to walk off the beaten path and that they can accommodate that.

That's the diametric opposite of looking at the menu and picking what you fancy the most. In a way it's about not taking your likes and dislikes seriously, so as to stay open, because this is a place you don't know, while this person you're talking to essentially is the place.

So why do I prioritize the experience of the service provider over mine? I don't. I recognize that my experience is very closely tied to that of the cook or the bartender, and if she's happy, all other things being equal, my chances to be satisfied are the highest. A happy cook makes better food, a happy bartender makes a better drink, and the experience of us aligning our needs leaves us both feeling uplifted, because this wasn't just another impersonal money-for-goods transaction.


Would you extend your experience with the food sector to your car? Say you bought a car that was made by people who loved working with it. The mechanics even love to repair it.

But now, 1 year passes and you go for a regular check-up to your mechanic and they start visibly sighing when you enter. They now hate repairing this ancient piece of tech, it barely matches any of their tools, and they caution you that the tool manufacturers are actually moving to a six-month schedule for new car repair tools getting released, and half the tools they use for your car are already deprecated and likely to be discontinued next release.

Would you be happy to just buy a new car, to prioritize the service provider's experience? Or would you seek a different service provider/car brand, that doesn't do this?


That's a thought-provoking question. I don't own a car, so this is hypothetical. Use a car that your mechanic does like to work on (and doesn't start to dislike after 1 year). In your scenario the tooling sounds like a significant investment, so the mechanic probably also "sighs" that he has to get new tools often and that he can't service older models. Is there a repair-friendly car on the market that's also good enough in other respects? From what I hear, people take note of repairability when choosing a car (I do it when choosing a laptop). If the mechanic liked the "1-year car" initially, he misjudged it.

I guess what I'm talking about is looking for win-win situations. I get the feeling that when it seems like you can't get there, a more fundamental problem has been introduced earlier in the process. Your example looks a bit complicated, because it's more obvious that the relationship is actually made up of more than 2 parties. Those explicit in your example: manufacturer, mechanic, driver. You could keep adding parties: regulator, importer, suppliers to factory, trade unions, etc.


That is not what it looks like at regular car repair shops.


What this is really about: the selling point of an OS is the software that runs on it (something Windows proved). The more the OS is a moving target, the more the costs fall on developers to support old versions and follow new OS releases, and the less software is going to support it.

As a developer I love everything that is new (I mean NEW, not repackaged old technology!); as someone who also has to support software at the system level, I absolutely hate it.


I actually think the opposite is true. Typically customers choose MacOS over Windows (when price is no object), and the features in the Darwin/XNU systems are certainly very customer-focused. Having said that, anecdotally, while developers may enjoy having new capabilities, more often than not when old APIs are deprecated and forcibly replaced with new ones, developers end up upset (see: Metal replacing OpenGL on MacOS / 32-bit application deprecation on MacOS).


> Typically customers choose MacOS over Windows (when price is no object).

But when they do so, I think they do it in spite of the lax attitude towards compatibility, due to other factors. Chief among them are the higher prestige Apple products have, the fact that many basic users only use a few programs that are all under heavy development (e.g. browsers), which lessens the sting, and that many regular users have become resigned to being abused by their technology providers.

If you narrow the question down to one factor ("we're going to break your stuff" vs. "we're going to do everything we can to keep your stuff working"), I think most people would choose the latter.


I think backward compatibility is a feature, and the lack of it can be considered an anti-feature as I mentioned in the article:

> As nice as backward compatibility is from a user convenience perspective when this feature comes as a result of a static kernel Application Binary Interface this trade-off is essentially indistinguishable from increasing time-preference (or in other words declining concern for the future in comparison to the present).

I would argue that users are not using the system due to the lack of backward compatibility, rather my contention would be that this feature comes at a cost that outweighs the benefit (also from the article):

> This can be seen with a continuous layering of hacky fixes, sloppily 'bolted on' feature additions (built in such a way that new additions don't conflict with existing APIs), and unremovable remnants of abandoned code segments left in place purely to ensure that applications continue to run. These issues not only cause their own problems but, in aggregate, cause a huge creep in the number of lines of code in privileged memory. This does not grow the exploitability of a system linearly but rather it causes exploitability to grow exponentially due to the fact that by there being more code to exploit, malicious functionalities can be chained together and made more harmful.


I am not upset because Metal replaced OpenGL. I actually like Metal's design.

What I am upset about is the removal of a standard that worked well. Just don't allow OpenGL apps in the Store and that would be enough.

Even worse, the removal of 32-bit support in macOS. Now that's an extraordinarily bad move that confirms Apple cares about neither enterprise nor gaming.


AFAIK OpenGL hasn't been removed, and isn't planned to be in the near future; it's just been deprecated. Apple isn't adopting new versions of OpenGL or Vulkan, but OpenGL code that used to run on macOS still runs on macOS.


OpenGL support was abandoned by Apple years ago, not to mention being broken.

The fact that it is now officially deprecated is just a warning from Apple that it will be removed soon, without the bad PR.


You are moving the goalposts. You stated:

> What I am upset about is the removal of a standard that worked well

That is not correct on either count:

1) OpenGL is a terrible fit for modern GPUs, so I don't agree that it "worked well"

2) OpenGL was not removed anyway

> OpenGL support has been abandoned

That is not correct. Code continues to be written and maintained to keep OpenGL working on newer GPUs, both OpenGL ES on iOS GPUs, as well as OpenGL on mac GPUs.

> The fact that it is now officially deprecated is just a warning from Apple to remove it soon without bad PR.

macOS/iOS continue to support /many/ deprecated APIs, some of which have been deprecated for over a decade. Contrary to popular opinion, things are not removed just for fun. Things are removed when there is a sound security/technical reason, or when there is a high ongoing cost either to end users or the development process.

The alternative is to be MS and never remove anything, where any change to the observable behavior of the system (or even moving internal struct values around!) can cause breakage, and so is not done, or requires inserting hacks to preserve behavior. If you think that doesn't impact individual engineers' decision-making process ... well, I don't know what to tell you. It must be soul-crushing to know that if you change an internal data structure from uint16_t to uint32_t, some crappy app that depends on being able to poke around in it will break. Surely such policies encourage some developers to do even more such hacky things, knowing MS will take the blame and end up making sure you can keep getting away with it.


> Code continues to be written and maintained to keep OpenGL working on newer GPUs, both OpenGL ES on iOS GPUs, as well as OpenGL on mac GPUs.

Yes, but from the outside this is not necessarily obvious. Just ask the VLC developers: they’re running into breakage constantly on macOS. There were a couple of builds back in the summer where OpenGL just wouldn’t initialize at all, meaning even system components like WebGL, iTunes visualizers, and certain screensavers wouldn’t work correctly (though I did notice that there’s a new one written in Metal that wasn’t affected…)


> The alternative is to be MS and never remove anything,

They do remove stuff all the time; the transitions just happen to be a bit more gentle, with more years to prepare for them, although not always - e.g. 16-bit, MS-DOS support, WinG, WinRT (the Win 8.x variant), WDDM, WCF, Remoting.


> You are moving the goalposts.

No, I am not moving goalposts. OpenGL has been useless for all intents and purposes in macOS for years already. The deprecation is the least of concerns.

> 1) OpenGL is a terrible fit for modern GPUs so I don't agree that it "worked well"

A terrible fit? What are you even talking about? It is as featureful as the latest D3D11 which is the API most games are based on. What is useless is the OpenGL version and drivers that Apple ships.

Please don't spread FUD. There is no security or technical reason, nor significant cost, in maintaining a proper OpenGL.


> Even worse, removal of 32-bit support in macOS

32-bit macOS used a fragile ABI for Objective-C, meaning all ivars had to be in public headers and changing any of them changed the runtime layout of your class and _all_ subclasses. AKA adding a field was a breaking change. This imposed a huge maintenance burden up and down the stack. Often classes would include a void* pointer to a side-table where new fields could be added, or the public class was a wrapper around an internal implementation. Both of those have a performance cost (extra pointer chasing + extra mallocs, or double objc_msgSends for every method/property).
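The `void*` side-table trick described above has a common C analogue: the opaque-pointer pattern, which buys a non-fragile ABI at exactly the cost mentioned (an extra allocation plus pointer chasing). A minimal sketch, with invented names, contrasting the two layouts:

```c
#include <stdlib.h>

/* Fragile layout: the fields are in the public header, so the size
   and field offsets are baked into every client binary. Adding a
   field is an ABI break for already-compiled clients. */
struct fragile_point { int x, y; };

/* Opaque layout: the public header only forward-declares the type,
   so clients never learn its size or layout. */
typedef struct opaque_point opaque_point;

/* In the library's .c file - fields can be added here freely later. */
struct opaque_point { int x, y; };

opaque_point *opaque_point_new(int x, int y) {
    opaque_point *p = malloc(sizeof *p);  /* clients can't stack-allocate */
    if (p) { p->x = x; p->y = y; }
    return p;
}

int  opaque_point_x(const opaque_point *p) { return p->x; }
void opaque_point_free(opaque_point *p)    { free(p); }
```

Every access to an opaque object goes through a function call and a heap pointer, which is the "extra pointer chasing + extra mallocs" performance tax the comment describes.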

There are many other performance optimizations (e.g. non-pointer isa) that were impossible on 32-bit, meaning every app that refuses to move to 64-bit imposes even more performance costs. Shipping 32-bit versions of frameworks also bloats the size of the OS and forces the OS to load a duplicate dyld shared cache. It even makes system updates and installs take twice as long (two shared caches need rebuilding, after all!)

None of this accounts for future optimizations and improvements that would have been nearly impossible if the system still had to support 32-bit without forking half of the userspace frameworks... that kind of forking would be an absolute maintenance nightmare.

The deprecation of 32-bit has been obvious for over a decade. It is based on sound technical reasons. You may not agree with those reasons but they have nothing to do with not caring.

You might instead ask why a developer would choose to ship a 32-bit app/game anytime after 2010 when 64-bit was available and obviously the future? Should all Mac users continue to pay the cost in disk space, memory, and performance? How long should they pay that cost? Were you hoping macOS would support 32-bit applications in 2030? 2040? What about the undeveloped features and unfixed bugs due to the overhead of continuing to support 32-bit?


In an ideal world, 32-bit would continue to be supported for as long as people use 32-bit applications. If there is no good reason to port an application to 64-bit (other than the OS developers forcing it), why do it? Windows supported 16-bit up until Windows 10, and there had been absolutely no mainstream desktop applications or even games on 16-bit for over a decade by that time. That seems like a good standard.

The move to 64-bit, on the other hand, is not all upside. There is overhead for pointer-heavy data structures, which kept some applications from preferring it for a long time, mainly in the gaming arena. I also remember that browsers were very reluctant to move to 64-bit, until the security benefits from ASLR became overwhelming (and web apps started being so memory-hungry that 2-4GB started looking small).

Also, the clear technical deficiency you cite with Objective-C on 32-bit (the ABI that breaks when you add private fields) is an oft-defended feature in the most used native programming language in the world, C++. There, it is done for performance reasons - to avoid an indirection for each structure allocated on the stack.

Overall, my point is that the technical case for 64 bit is not nearly as clear-cut as you make it out to be for many applications. I do agree that the ecosystem case for moving to 64 bit is pretty strong, but I don't think it is enough to say that developers who don't do it are idiots and dragging everyone back.

And just to be clear, I'm not being defensive or bitter, the applications I work on moved to 64 bit as soon as we could, probably 7 years ago, with very good technical reasons to do so - they were very memory hungry and benefitted greatly from having access to more than 2 GB RAM (Windows).


What a load of BS. The 32-bit ABI is independent of the 64-bit one. Nobody is asking Apple to backport new changes or APIs to 32-bit because nobody cares about new apps in 32-bit. We do care about not breaking the existing ones, though.

The rest of your arguments about storage size, update time, "optimization" etc. are not just irrelevant, but also solved more than a decade ago without penalties in other operating systems.

The funniest thing is the last one about 2010 devs. Go ahead, go back in time to 2010 and tell everybody to port their hundreds of millions of LOC and third-party dependencies (many without source) and duplicate their testing costs, just so that Apple can't be blamed for dropping 32-bit support ten years later.

Let me tell you the reality: Apple is a hardware company, not software. Apple cares about selling iPhones, not enterprise long-term support or non-casual gaming. On the other hand, Linux and Microsoft and other companies care about users and customers and they are paid for that because they are a software shop. That's the difference.


> Shipping 32-bit versions of frameworks also bloats the size of the OS and forces the OS to load a duplicate dyld shared cache.

Why not make the 32-bit runtime an optional download?


OpenGL has not been removed from macOS though. It’s just been deprecated.


See the sibling reply.


There is a lot of weird stuff in Darwin that makes me wonder... Why do they deprecate useful things at other layers and keep this clunky Mach thing? They could switch to FreeBSD or Linux and probably perform better. It is hard to take them seriously when they say they don't like to maintain creaky stuff when there is Mach...


Sounds like you don’t understand what Mach actually offers. Check out Amit Singh’s book.


You might think I don't understand, but I do know about kernel development, and yes, the IPC is pretty unique, for instance.

But it can be recreated with Unix domain sockets and what have you, in exchange for a kernel with more active work going on, not just by a single entity - more wood and arrows behind it, so to speak. And Mach is dated, from the microkernel-experimentation era of the '80s and '90s, with a lot of overhead of its own.

The fact remains that Linux outperforms macOS on the same hardware. Yes I have run comparisons.


> But it can be recreated with Unix domain sockets and what have you

Or, heck, they could port the IPC and anything else worth keeping. Even just building a private fork of FreeBSD with the Darwin personality grafted on top would be viable.


FWIW, the NextBSD folks did exactly that - they ported Mach IPC into the FreeBSD kernel.


I think as core counts increase, in 5-10 years we’ll see operating systems that run solely on a dedicated low power core while the other processes run in tickless mode with a more topology aware scheduler and almost no context switching or core migration.

This is already how game consoles and low latency systems work for the most part.

At that point microservices might become more palatable, since the context switching won't damage the rest of the processes' performance as much. And Linux's performance advantage might dissipate as scheduling and cache pressure become less relevant.


> Typically customers choose MacOS over Windows

Well, unless they want to play games or run any non-Mac software … I don't know where you get the idea that customers prefer MacOS, but apart from some niche communities (designers, some subgroups of developers) this simply doesn't hold.


Only FOSS developers are upset with Metal, everyone else appreciates not having a clunky 3D API still based on C, and is already taking advantage of it on their 3D engine.

Plus, regarding the theme being discussed here, the use of Metal is transparent when using SceneKit, SpriteKit, Core Graphics, and so on.


It is very much not the case that only open source developers prefer cross-platform standards to Metal and all the other vendor-specific APIs.

I have to say that I don't know why you dismiss open source graphics stacks. Anyone who has worked with the open source Mesa knows it is a breath of fresh air compared to the proprietary Qualcomm drivers, Mali drivers, Apple OpenGL drivers, or (horrors) fglrx.


Then you don't go much to game developer conferences, watch GDC talks, or read game development related publications like EDGE or Develop.


It doesn't take much searching to find game developers saying the exact same things I'm saying: https://appleinsider.com/articles/18/06/05/some-game-develop...

And I've spoken with the Unity engine developers, who say they still consider OpenGL ES 2 support essential. Sure, Unity uses Metal (and I use Metal) because Apple forces us to in order to get maximum performance, but does anyone really want to write both Vulkan and Metal? Valve wouldn't have acquired MoltenVK if they were really itching to write Metal!


A couple of indie developers it seems, not the group I was talking about.

Valve is in the driving seat with Khronos regarding Vulkan.


I'd like to point out that this is very much not what you started out saying upthread, that only open source developers are dissatisfied with Metal.


Fair enough, I should have been more specific.

Let's not forget that OpenGL only stayed alive at Apple thanks to the NeXT acquisition and Apple's attempts to cater to UNIX devs during its survival years; OpenGL wasn't even on the radar for Copland, which was headed toward QuickDraw 3D instead.

Which, actually, was probably the only OS that provided a usable OpenGL SDK, something neither the ARB nor Khronos ever delivered.


Customers also choose Ferrari over Volkswagen, when price is no object.


RTFA:

Computer system design reflects the business that a company is in. It isn't the case that after years of development Microsoft has ended up with a bad operating system because people at Microsoft are idiots, rather it's the case that they're in the enterprise software business.

It isn't the case that Linux has not adopted the architectural advancements [...] because they aren't smart enough to implement those changes. The reason they have not adopted these changes is because they are in the business of ensuring that the people who pay them aren't made unhappy by a massive change to the kernel's architecture that necessitates a non-trivial expenditure of time and capital to modernize all its software products just to keep them running.

Computers are becoming less secure and in many cases only a few systems are continually innovating in both the APIs they offer developers and the architecture of the underlying system itself.


I thought Linux didn't provide a stable ABI and tells developers to upstream instead? Is this the same topic?


The article is about OS ABI for applications (which Linux has), not internal OS ABI for drivers (which Linux hasn't).


Linux doesn't have it, but Treble-ized Linux (Android) does.


It seems like focus on developer ergonomics over actual users has been monotonically increasing since I started my career. It may be great for the OS developers that they don't have to care about back compat but it is terrible for the users. Your API may be beautiful but my software no longer works, so the system is useless.

In the case of Linux, everyone is now shipping entire userlands with their applications via Docker just to work around compatibility issues. We'd be shipping entire VMs if the kernel weren't the only one holding the line on compatibility.
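That "shipping entire userlands" pattern looks like this in practice (a hypothetical sketch; the base image tag and binary name are made up): pin an old distribution's userland so the application's library ABI never shifts underneath it.

```dockerfile
# Freeze the entire userland (glibc, libstdc++, etc.) at a known-old
# snapshot so ABI drift in the host distribution can't break the app.
FROM debian:9

# Hypothetical prebuilt binary linked against that era's libraries.
COPY legacy-app /usr/local/bin/legacy-app

CMD ["/usr/local/bin/legacy-app"]
```

Only the kernel's syscall ABI is shared with the host; everything above it is frozen in the image.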

It's been a long time since I saw a programming post talking about how some new paradigm or way of doing things would make life great for the users.


> Your API may be beautiful but my software no longer works, so the system is useless.

If you install from a package, just update. If you built from source, just recompile. If you got a binary, use the support contract you paid for. And if you paid for a binary without a support contract, you got screwed hard, since you can't get bug fixes even if the OS was immutable. But if you did screw yourself, there's vmd that lets you freeze your OS in time.


> since you can't get bug fixes even if the OS was immutable

An immutable OS prevents new bugs from cropping up.


It certainly keeps old security holes in play.


One of the most horrible things about iOS is that it breaks your apps every year.

This is a terrible experience for customers (since their apps break every year) and for developers (since they have an ongoing maintenance burden dumped on them by Apple just to keep their apps working across yearly iOS updates.)

The main beneficiary of abandoning ABI compatibility (as Apple has done) is the platform developer (e.g. Apple) who avoids the maintenance burden of backward compatibility.

It's arguably the wrong approach because it helps the platform developer (Apple) at the expense of existing customers and developers. There is a multiplicative burden of pain: each time Apple breaks something, millions of customers and thousands of developers pay an immediate price.

There is a long-term user benefit to platform evolution, but the short-term cost is relentless and ongoing.

For game developers in particular, the stability and backward compatibility of Microsoft/Sony/Nintendo platforms is a dream compared to the quicksand of iOS development.


I created and maintained a game plugin for World of Warcraft. For those of you who don't know, the plugin API changed constantly. After every new release, the first thing I did was not enjoy the new version of the game, but fix my plugin so it ran on the new version. Later on they introduced a public beta (maybe they always had one and I just didn't know), so I could test ahead of time and release a new version along with the game release. But it was all for fun, so it's all good.

Then life happened and I left WoW completely.

Fast forward ~10 years to 2019, they released World of Warcraft Classic, which I assume uses the same API as the original game in 2004. Someone emailed me asking if I could release a new version of the plugin that works with WoW classic.

I was like, "No."


Open source it?


All WoW plugins are open source by default. They are essentially Lua scripts.


> One of the most horrible things about iOS is that it breaks your apps every year.

Not because of ABI compatibility. The system frameworks even put in effort to work as they did before if you don’t update your application.


There has been one ABI break in iOS ever. In over 11 years.


At this point I’m pretty much done with smartphone apps.

Just give me xterm and bash, even with a touchscreen that would be better than dealing with everyone trying to reinvent the wheel every year.


> One of the most horrible things about iOS is that it breaks your apps every year.

You have to be a really terrible app developer for that to be true.


If the number of app updates I get every year with "iOS XX compatibility" in the release notes is any indication, there must be a lot of really terrible app developers in the world!


That would be consistent with all the really terrible apps I've seen.


Apple’s own UI components break from year to year. UINavigationController is infamous for changing its layout code every other release and doing so in such a way that apps with relatively standard usage entirely within the realm of public API see odd bugs and changes in behavior–sometimes even without an SDK relink.


Breaking ABIs freely is a decision OpenBSD made that's been helpful in many ways, but the lesson we should learn from them is to engineer layers of failure mitigation into all our systems. Software bugs are unknown unknowns.


Selecting a BSD comes with an implied social contract regarding its mutability across versions. If you go into OpenBSD believing code from n-3 runs on version n+1 you misunderstood the social contract. FreeBSD or NetBSD or DragonflyBSD might have a different social contract.

Selecting OS X used to imply a much stronger attempt to handle this: maybe n-3 is outside the goal, but n-1 and n+1 usually kinda work. Except when things like "we don't want 32-bit any more" hit, after two or more years of heads-up. Turns out vendors don't want to incur that cost, and stuff people want and "depend on", like kexts, stops working.

Consider how Python 2 dependencies are faring in a world of Python 3, and that's userspace, not ABI. It's not the OS, but... it's similar.


> Selecting a BSD comes with an implied social contract regarding its mutability across versions.

Indeed, which is why its market share is tiny.


I seriously doubt that's the reason, especially compared to hardware support and the usual hurdle of "not installed by default".


It might be some people's reason. I got to a point where I couldn't even get decent 2D X behaviour, and DSDT configs for laptops stopped working, or even depended on Linux to get them working. It was a signal. Van Jacobson dropping primary development of his TCP work in BSD and moving to Linux was another signal to me, and maybe to some others.

Overwhelmingly, I think desktop support and the Ubuntu LTS effect did it: FreeBSD demanded more of you to get it to work. I still like the working outcome, but commodity UNIX is just simpler from OS X, or from Ubuntu. And vendors back them enough that you can get more things to work, more quickly, closer to the cutting edge. I am pretty sure I will get a working Linux desktop on any laptop I plausibly buy next time. I believe 80% of things will work fine in FreeBSD, but the last 20% (Synaptics driver, fingerprint driver, TPM driver, blob-ridden WiFi driver...) are going to be hard.


> Otherwise various efforts making use of containers, lightweight virtualization, and binary wrappers for the purposes of introducing new options to companies allowing them reasonable backward compatibility for the various applications that have become entrenched in their organizations will be the only way to break away from the stagnation of the current paradigm of enterprise operating system development.

That was essentially what MS did with "Windows on Windows" that brought 16-bit applications over to Win32. And Apple with Rosetta, the blue box, etc. These were hugely expensive because they had to track down all the unwritten interfaces applications use.

If Linux standardizes virtualization for enterprise support, applications should run in it all the time, so it's impossible for them to access any private interfaces.

And it's sustainable because when enterprises find they're stuck with these closed source applications, they'll have a direct interest in supporting maintenance of the older virtualization.


> Companies who make such investments often view the money they've paid for the development of this software in a similar manner to how they would view the investment into any other asset - which is to say that the expectation is that it will continue to function for years.

“Any other asset” is not informative. When my company buys me a laptop, the assumption is that it will continue to function for three years. When they buy me a chair, seven. When they buy a building, thirty.

That’s an order of magnitude difference in depreciation schedules. The two problems here I see are:

1) Nobody in the accounting department had any clue how to do this in the 1980s and 1990s. So their cost projections were badly inaccurate, and they didn’t have realistic depreciation schedules.

2) The contracting firms are not incentivized to do maintenance and don’t even know how to do it in the first place.

> As nice as backward compatibility is from a user convenience perspective when this feature comes as a result of a static kernel Application Binary Interface this trade-off is essentially indistinguishable from increasing time-preference (or in other words declining concern for the future in comparison to the present).

This absolutely is distinguishable. Backwards compatibility is a complex tradeoff, no matter who you are (OS developer, app developer, end user, etc). It’s as complex as opex vs capex (and probably more similar to that tradeoff).


This makes no sense. The bulk of the ABI compatibility is not in the kernel, and Linus's mantra of "not breaking userspace" hardly applies to applications from the Linux Foundation's most paying members. The bulk of the ABI for Linux applications comes from libc and other libraries.

The one case where breaking ABI would make things so much easier is y2038 but it only applies to 32-bit systems, again nothing that matters to the Oracles and SAPs.


Speaking of time... OpenBSD bumped time_t to be 64 bit in 2013, even on 32 bit systems: https://www.openbsd.org/papers/eurobsdcon_2013_time_t/


Right, that's why I mention it.


> The bulk of the ABI compatibility is not in the kernel

So does this mean the Linux kernel is not as bulky and full of bugs as the article claims?

https://en.wikipedia.org/wiki/Linux_kernel_interfaces#Linux_...

https://upload.wikimedia.org/wikipedia/commons/b/bb/Linux_AP...

p.s. I am just an average Linux user who wants to know more about this


Most of the 100,000,000 lines of code in Linux are drivers, or support for architectures you have never seen.

Yes, the core is bigger than OpenBSD's. It's also more scalable and generally has higher performance. It's got nothing to do with backwards compatibility.


I too would like to know more about this.

* The essay appeared to continually interchange the terms API and ABI, which to my mind are very different things.

* The examples provided weren't as concrete and concise as I would have liked.

* The author failed to convince me that layers of emulation and abstraction (i.e. what Windows currently does) are somehow fundamentally flawed. Actually, there's an argument in favor of this at the end; is the author under the mistaken impression that Windows doesn't already do this? Perhaps I've misunderstood the intended point?

> Computers are becoming less secure

That is not my impression _at all_.

> It isn't the case that after years of development Microsoft has ended up with a bad operating system because people at Microsoft are idiots, rather it's the case that they're in the enterprise software business.

I found it hard to take the essay seriously due to statements such as the above. The author would do well to call out explicit problems with Windows rather than generally smearing it.

The author's point seems to boil down to an argument to shift resource expenditure off of OS developers and on to user space developers. That's difficult to take seriously, because in the real world resources are limited. The OS is the underlying infrastructure that everything is built on top of - if it changes too quickly, it's no longer particularly useful as an OS. Preventing breakage is (to my mind) one of the core aspects of an OS developer's job. Linus "WE DO NOT BREAK USERSPACE!" Torvalds is one of the primary reasons I'm comfortable using a Linux OS as a daily driver instead of Windows or macOS (the Debian maintainers are the other reason).

(I should note that I choose to use Linux due to development tooling and open source ideals, but at their core Windows and macOS both seem like perfectly reasonable OSes to me.)


> Linus Torvalds continues receiving his Linux Foundation salary paid for by the massive cheques its member organizations cut him in exchange for influence over the kernel's development.

The author seems focused on that aspect as the reason Linus is against ABI changes. But in fact this has been his stance for years, because he's user-centric: people expect things to continue working when they upgrade the kernel, so if you have to break their experience, you really need a very good reason. It's not as if he only started thinking this way when he became an employee of the LF.


The writer mentions the corporate user, but then never mentions them again.

I use a Linux desktop. If I want old versions of open source stuff, Wine running the Windows binary is where it's at.

We now have a complete, futureproof free software stack with decades of backwards compatibility! It just has win32 in the middle.

This article makes the case for the advancement of OS development, but not for what people use computers for.


I think the stability of user space interfaces is simply good engineering. Linux can run binaries compiled way back in the 90s. Because of this discipline, people trust Linux as a platform. People generally have no problems updating their kernels and it's safe to assume there will be no problems. This isn't the case in user space: many projects have no problem with breaking compatibility and forcing dependent packages to be updated as well.

The author claims Linus Torvalds enforces Linux binary interface stability because the Linux foundation members that pay his salary want it. Is this really true? If that was the case, I'd expect the internal kernel interfaces to be stable as well. They are unstable and he actively fights to keep them unstable even though the companies would very much enjoy having stable driver interfaces.

https://yarchive.net/comp/linux/gcc_vs_kernel_stability.html

> Stuff outside the kernel is almost always either (a) experimental stuff that just isn't ready to be merged or (b) tries to avoid the GPL.

> Neither is worth a _second_ of anybodys time trying to support, and when you say "people spend lots of money supporting you", you're lying through your teeth. The GPL-avoiding kind of people don't spend a dime supporting me, they spend their money actively trying to debase and destroy what I and thousands of others have been working our butts off for.

> So don't try to make it sound like something it isn't. We support outside projects a hell of a lot better than we'd need to, and I can tell you that it's mostly _me_ who does that. Most of the core kernel developers argue that I should support less of it - and yes, they are backed up by lawyers at their (sometimes quite big) companies.


I will say, Windows 95 was pretty great, I identify with the Microsoft customer in the hero image. I'm gathering notes to write a GUI toolkit which only makes well-formed Windows 95-style UIs.


Maybe my memory is a bit rose colored, but I still think NT 4.0 was great. Win 95 interface and rock solid OS.


Windows 2000 was even more rock-solid. I used to run it on a couple of Sony Vaio laptops and I never had a blue screen.


Didn't they move the video driver back into the kernel in 2000?


Yeah, but they actually worked so it was fine.

Of course, I will not attest to any architectural advantage of NT, especially today. Everything from the filesystem to the schedulers to the memory management... it all leaves a lot to be desired. Maybe with Genode coming along, we'll get a serviceable seL4 desktop that I can run my Chicago-style UI on. :- )

My impression of the eventual ideal is that the formally-verified stuff can be allowed into the kernel, if there is some valid reason to do so; and everything else can sit elsewhere.


At least in recent Windows 10 previews, patching the kernel memory area itself is no longer allowed; only hooks may be used to alter kernel behavior (thus breaking some silly anti-cheat engines). It also comes with an option to enforce this with virtualization.

So yes, it is ongoing. But not kernel -> userspace; instead it is hypervisor-mode kernel(?) -> kernel.


I'm not commenting on the OS architecture, just my quite extensive user experience.


You were definitely living in the future with NT 4.0. But the memory requirements were way too high for the average home user to afford it. Win95 was a bridge into this modern new world of applications protected from each other.


I was pretty excited too.

I even installed it with floppies on a laptop.

I wasn't concerned with anything the article was about at the time. It just felt like the PC OS world took a jump into the future... even if the reason I mostly felt that way was the UI and etc.


I authored this document on the Linux Device Driver model 11 years ago and amazingly it still represents the current policy: https://www.linuxfoundation.org/events/2008/06/the-linux-dri...

Specifically, the Linux kernel maintainers, not the Linux Foundation, determine the policy that the user space ABI remains stable while the device driver API is unstable.

Disclosure: I work for the Linux Foundation, and I know that if we told the kernel maintainers to change their policy they would laugh at us.


Agreed. And I don't think Linus' opinions on compatibility come from the funding model.


This is all lovely as a matter of the platonic ideal of an operating system. But... the users have spoken. They don’t want their software to break.

Worse is better, and Microsoft got this one right.


The way I see it, VMs already encapsulate this. App --ABI--> VM'd Kernel -> Hypervisor API.

But we can do this much more efficiently. IIRC, prior variants of this were called "personalities". I think the term's been reused now.

I think we could have the program loader consume the loaded program and act as an API proxy between it and the actual kernel.


It sounds like what Solaris containers did: the kernel is responsible for handling kernel ABI compatibility, and everything, including the system utilities, runs inside a container that is given an ABI simulated by the kernel.

The model is App -- Static ABI --> [Simulated kernel ABI provided by the actual kernel] -> Actual kernel.

Everything outside the specified kernel ABI version simply doesn't exist as far as the application is concerned.

So it can run applications that are many years old, as long as the kernel is willing to simulate the ABI for them.

And it is also how 64-bit Windows runs Win32 apps and WSL: there is an API proxy inside the kernel that simulates the API for them.


This article helped me understand a lot; I knew development on iOS required constant updates, but now I know why. Thank you.

BTW, there are several misspellings of "its" in your article. Search for "it's" because most of them should be changed to "its"


No thanks. I have had the terrible experience of being forced to upgrade software purely because a newer version of macOS does not support the old version of my music software. I am looking at going completely hardware for music production now, so I don't have to deal with the unnecessary upgrade treadmill that is entrenched in computer culture.

EDIT: forced into paying for upgrade


Really interesting. I didn't know much about OpenBSD before, nor did I know that Windows/Linux maintain ABI compatibility indefinitely, although it makes sense.

It's also interesting to consider the web as an application platform in this context. It too has an append-only API that places high importance on indefinite backwards-compatibility. However, because that API is dynamic, not binary, the underlying implementation has much more room to maneuver and re-structure without breaking it.


Note that while the Linux kernel does maintain ABI compatibility indefinitely, the same is not true for glibc, so any dynamically linked applications (i.e. most applications of the past 20 years) have very poor ABI compatibility.


the same is not true for glibc, so any dynamically linked applications (i.e. most applications in the past 20 years) have very poor ABI compatibility

Other libraries, sure, but when it comes to glibc, this is false. glibc uses symbol versioning. E.g. a program that uses fork uses a versioned symbol:

    $ nm a.out | grep fork
                 U fork@@GLIBC_2.2.5

glibc ships functions under the current ABI version alongside implementations kept at previous ABI versions, so it supports programs compiled against many older versions of glibc.

See:

https://developers.redhat.com/blog/2019/08/01/how-the-gnu-c-...


FWIW, I've seen dozens of programs break with changes to glibc.


> the project enforces a hard ceiling on the number of lines of code that can ever be in ring 0 at a given time

I tried googling to find what this limit is and where it's mentioned. Could anyone help me out with a link? What is the limit?


IIUC, ABI compatibility is one of the key design goals of the Fuchsia OS project.

Is that accurate?


Maybe the answer is a rolling window of stability for OS APIs--something like 10 years (Windows 10 having Windows 95 compatibility mode is a bit absurd). On the other hand, if you have a large library of test software, maintaining API bridges might be doable, and for software more than 5 years old, performance on modern hardware shouldn't be a major concern.


There are major corporations running key business software that is 30 or even 40 years old. I wouldn't be surprised if some were even hitting 50+ with old COBOL mainframe applications.

This is one of the main reasons that corporate OS producers like Microsoft support backward compatibility that seems excessive.

You're probably right that within a given hardware platform, older software will generally be very fast on newer hardware, but if one tries to migrate platforms (e.g. mainframe emulation on Linux or Windows), that's not a safe assumption. Around 2009, I saw a team try to migrate some of those early-80s mainframe apps to an emulator, and even on high-end HP servers running Windows, performance was too poor to use a lot of it in production, because it had all been written with ridiculously high-throughput mainframe storage I/O in mind, and emulation couldn't keep up.

You may think (as I do) that those corporations should just bite the bullet and replace those old apps with something modern, but they're the ones writing the checks, so Microsoft and company give them what they want.


And it's time to innovate in the languages used for writing OS kernels and core services as well. All mainstream kernels are stuck with C/C++. Something newer and cleaner, Rust, D, you name it; something that likewise isn't afraid to deprecate legacy, and that offers many important new features for OS developers.



