
Probably the right call - many of them don’t have ABI or API stability guarantees, or, in the case of Python, needlessly broke backwards compatibility in ways that are still causing pain today.

Apple used to ship an incredibly old version of OpenSSL because people used its API which was not ABI stable. Even getting rid of it was a nontrivial amount of work.

The lack of care about API and ABI stability in developer-facing open source projects (interpreters, libraries, frameworks, command-line arguments) is still bizarre to me: why make your work hard to use and update? People ship out-of-date libraries because updating requires changes beyond just pulling a new binary, and that work costs more than the gain is worth to them.

It’s part of why companies like Apple and Microsoft care so much about backwards compat: they want people running the most recent OS possible, and they don’t want people to avoid updates. Every time an update breaks something for someone, that person holds off on future updates.

On the other side, developers don’t want to invest time and money into a feature that they think will stop working or crash their app in the future.




> It’s part of why companies like Apple and Microsoft care so much about backwards compat

Compared to Microsoft (at least the Microsoft of old), I don't think Apple spends anywhere near the same amount of effort on backwards compatibility.

You can create a single .exe that will work on any Windows starting from Windows 95 - nearly 25 years. In that timespan, Apple changed CPU architectures twice, and their OS architecture once. If you're willing to restrict yourself to 32-bit Windows, which can run Win16 and DOS apps too, then it goes back to the early 80s.


Not true. Apple spends an enormous amount of effort on backwards compatibility. For example, they wrote a JIT compiler to be able to run 68k code on PPC. They did the same for the PPC to Intel transition. They maintained the "Classic" ABI and binary formats for years after the move to Mac OS X.

It is true that Apple doesn't maintain backward compatibility for as long as Microsoft does, but that's not the same thing as not caring. They put the effort into making transitions as smooth as possible for users and developers, and into getting through them quickly.


That's simply wrong.

Apple maintained 68k compatibility for years after it migrated to PPC. That's a completely different hardware architecture.

It maintained Classic compatibility for years after it moved to OS X. That's a completely different kernel and basic system architecture.

It maintained PPC compatibility for a few years after transitioning to Intel.

It supported 32-bit x86 for a number of years after the entire line was 64-bit capable.

Apple does care about compatibility; it just isn't as beholden to the past: it's much more willing to say "this is bad, so we're deprecating it, and eventually it won't exist/work".

But they work to not break any software until the feature being used is actually removed.

It is considered a high-priority bug if someone changes the behaviour of a function or object in a way that isn't backwards compatible. All APIs are essentially statically vetted to ensure they don't change, including things like struct layouts and sizes.
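Roughly, the kind of check that catches layout drift looks like this - a made-up sketch using C11 static_assert against a hypothetical struct, not Apple's actual vetting tooling, and with "frozen" values that assume a typical LP64 target (4-byte int, 8-byte long and pointers):

    /* Made-up sketch of "statically vetting" a public struct's layout.
       Any accidental change to size, field offsets, or alignment fails the build. */
    #include <assert.h>   /* static_assert (C11) */
    #include <stddef.h>   /* offsetof */

    struct PublicEvent {
        int   kind;
        long  timestamp;
        void *payload;
    };

    /* Values recorded when the API first shipped. */
    static_assert(sizeof(struct PublicEvent) == 24,             "ABI break: size changed");
    static_assert(offsetof(struct PublicEvent, kind) == 0,      "ABI break: kind moved");
    static_assert(offsetof(struct PublicEvent, timestamp) == 8, "ABI break: timestamp moved");
    static_assert(offsetof(struct PublicEvent, payload) == 16,  "ABI break: payload moved");
    static_assert(_Alignof(struct PublicEvent) == 8,            "ABI break: alignment changed");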

That said, it kind of misses the point to respond to "Apple and MS care about backwards compatibility" in this context by saying "I don't think Apple spends anywhere near the amount of effort MS does on backwards compatibility". The point is that if you care at all, then libraries and services that are unwilling to provide ABI stability (or are even hostile to it) cannot viably be shipped in any publicly facing way as part of your platform.


> struct layouts and sizes

To be fair, this is less of an issue with non-fragile ivars in Objective-C.


Yup, but OS X has a tonne of C (and unfortunately C++) APIs.

But yeah, having most of the system APIs in a language with built-in dynamic resolution makes a world of difference to the difficulty of maintaining ABI compat.


Publicly exposed? Most of CoreFoundation and the other C APIs vend objects that are entirely opaque, and there are very few actual C++ APIs that I'm aware of–though many are written in (Objective-)C++ internally.


There are plenty of APIs that have structs in their interface, although they are always passed by reference so that the structs can grow over time while still retaining compatibility with old software, e.g.

    struct APIFoo { int a_field; };

    void do_something(struct APIFoo* argument);

Later on we add a new field, and the struct becomes:

    struct APIFoo { int a_field; int new_field; };

For API compatibility you just need the source-level field names to remain present and keep the same types. But for ABI compatibility - i.e. an existing compiled program - the fields must also remain in the same location, so you can only add members to the end, and you can't change the alignment or padding rules.

What remains is the question of how the API implementation knows whether the argument it was given has the old struct's size or the new one's.

The Microsoft API idiom is to have a size field that you initialize to sizeof(APIFoo). This means that when you recompile your code against a more recent version of the API you'll get a modern-sized struct. My feeling has always been that this approach is more fragile, as you can unintentionally grow the struct without initializing the new members. Apple APIs that need this tend to use an explicit version number instead, so you have to explicitly opt in to the fields you intend to provide - as an example you can look up JSClassDefinition (IIRC) in JavaScriptCore.
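To make the two idioms concrete, here's a hedged sketch with made-up names (these are not real Win32 or Apple declarations):

    #include <stddef.h>

    /* Microsoft-style: the caller records the struct size it was compiled against. */
    struct OptionsA {
        size_t cbSize;     /* caller sets this to sizeof(struct OptionsA) */
        int    old_field;
        int    new_field;  /* added later; only read if cbSize says it exists */
    };

    void do_something_a(const struct OptionsA *opt) {
        int new_value = 0;                                      /* default for old callers */
        if (opt->cbSize >= offsetof(struct OptionsA, new_field) + sizeof(int))
            new_value = opt->new_field;                         /* built against the new header */
        (void)new_value;
    }

    /* Apple-style (cf. JSClassDefinition): an explicit version number the caller
       must bump deliberately when opting in to newer fields. */
    struct OptionsB {
        int version;       /* 0 = original layout, 1 = layout with new_field */
        int old_field;
        int new_field;
    };

    void do_something_b(const struct OptionsB *opt) {
        int new_value = 0;
        if (opt->version >= 1)
            new_value = opt->new_field;
        (void)new_value;
    }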

The other thing that Apple APIs will do is use a linked-on-or-after check: basically, macOS and iOS binaries include metadata about the system version that a library or application was compiled to target. This is generally used to gate legacy behaviour, to handle cases where some internal logic leaked across an encapsulation boundary but the leaked behaviour is simply too awful to retain in general; in the worst of worst cases code may include explicit tests for what the current application is (you can see a bunch of these in the WebKit codebase). Microsoft accomplishes similar tasks using (I think they're called) compatibility shims.
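In spirit, a linked-on-or-after gate looks something like this - the helper name and version packing are hypothetical, just to show the shape of the check; the real mechanism lives in dyld and the frameworks:

    #include <stdbool.h>
    #include <stdint.h>

    /* Assume the loader can report which SDK the main executable was built
       against, packed as 0xMMmmpp (major, minor, patch). Hypothetical helper. */
    extern uint32_t program_linked_sdk_version(void);

    #define SDK_VERSION_10_14 0x0A0E00u

    static bool use_legacy_quirk(void) {
        /* Apps built against older SDKs keep the old (leaked) behaviour; anything
           rebuilt against 10.14 or later gets the corrected behaviour. */
        return program_linked_sdk_version() < SDK_VERSION_10_14;
    }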

You are right, however, that the actual objects returned by macOS and iOS APIs are basically all opaque. Sticking with the JSC example: JSClassCreate() takes a pointer to a JSClassDefinition and returns an opaque JSClassRef, which is the actual API object you then use to create objects.
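For concreteness, the caller's side looks roughly like this (from memory, so treat the details as approximate):

    /* The caller fills in a versioned JSClassDefinition, but everything handed
       back is an opaque ref. */
    #include <JavaScriptCore/JavaScriptCore.h>

    JSClassRef make_point_class(void) {
        JSClassDefinition def = kJSClassDefinitionEmpty;  /* version field defaults to 0 */
        def.className = "Point";
        /* Callbacks (initialize, finalize, staticFunctions, ...) could be set
           here; the definition struct can grow in later SDKs precisely because
           it is versioned and passed by reference. */
        return JSClassCreate(&def);  /* opaque handle; its layout is never exposed */
    }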

As for C++: I would suggest you look at the nightmare that is IOKit. That's right, the kernel driver APIs are in C++, because NeXT wrote that code in the 90s, when C++ was considered the perfect solution to all problems (i.e. before people really understood how hard C++ makes ABI compatibility).


Well aware of how C++ structures have issues with ABI compatibility, as well as the various linked-on-or-after checks ;) Note that DriverKit on NeXTSTEP, on which IOKit is based, was written in Objective-C. The switch to Embedded C++ for Mac OS X was done to attract developers comfortable with the language, AFAIK.


Completely true. Apple only cares about back-compat down to a few older major versions. So often the big projects Microsoft does (like modern apps, Windows 10 S, etc.) fail or struggle to gain traction, IMO, because of the blessing and curse of their back-compat requirements.


I think Apple has the right formula. They do put the effort in to maintain back compat up to a certain point, but they also aggressively deprecate and remove within a few years of the arrival of a replacement technology. So you get the benefit of things not suddenly breaking, plus the benefit of the whole ecosystem moving forwards. If that means old unmaintained software is left behind, then so be it. The right balance, I think.


Those big projects you mentioned failed to gain traction in the first place - with a tiny usage count for (e.g.) UWP vs Win32, it's easy for MS to not take back-compat as seriously.


32 bit Windows binaries will even run on the ARM version of Windows.


32-bit PPC ran on Intel macOS, which is probably harder (it's basically a problem of how you map 32 registers onto 8).

The difference is that after a while Apple is willing to deprecate things, versus a lot of open source projects that don't guarantee any stability at all, even across minor version increments. That's the problem that makes them dangerous for platforms like macOS and iOS to expose as public API (intentionally or unintentionally - my understanding is, for instance, that the system OpenSSL was accidentally exposed publicly, and that took most of a decade to remove from the OS, i.e. most of a decade of shipping an increasingly old Frankenstein branch of OpenSSL).


If you're referring to the Python 2 -> Python 3 change, this was a pretty important one. Having a string type that isn't required to have a valid encoding is an awful idea.


And rather than just accepting that as what you had and occasionally dealing with it (as JavaScript did), they decided to cause a decade of problems that aren’t going to go away just because they’ve now declared Python 2.7 dead.


Because of that, JavaScript is now a pile of accumulated technical debt and one of the worst languages you can develop in. Hell, it has no types and still has four ways to declare a variable.

In fact it's so terrible that the most popular JS projects are actually projects to avoid writing JS, or to emulate features that don't exist in one JS implementation or another (TypeScript, Babel, JSX, webpack, polyfills, etc.).


A lot of those frameworks exist to bring modern features of JS back to older implementations, and others exist to functionally bolt on a distinct type system because many people like static typing.

And yeah, JS has some old crufty things, but it also has the ability to do things nicely now. No one is forced to use the old syntax, and most new code does not, but because those features have not been unilaterally removed, there has not been anything like the problems Python 3 has produced.

Seriously: what new syntax in Python 3 necessitated breaking Python 2 compatibility? Even your string example is weird, because JS allows arbitrary invalid UCS-2/UTF-16 strings despite adding actual Unicode-compatible APIs.


> The lack of care about API and ABI stability in developer-facing open source projects (interpreters, libraries, frameworks, command-line arguments) is still bizarre to me: why make your work hard to use and update?

If you're a FOSS developer with limited time and manpower (and, if you're lucky, a limited budget), maintaining backwards compatibility is hard. So hard that new features and bug fixes might have to take a back seat to it. Also, I'm almost certain that developers would rather be implementing new features and fixing bugs than maintaining backward compatibility. It's much easier to show a new feature or a bug fix than it is to show how backward compatibility was maintained.

Also, the big users of FOSS, projects like Debian (and Ubuntu) and Fedora, will rebuild their entire systems when new versions of software are released/integrated. This reduces the pressure on maintainers to keep the ABI compatible.


> rebuild their entire systems when new versions of software are released/integrated

Yeah, the ability to rebuild the entire system from source whenever you need it has resulted in those communities just not caring about a number of things that really do matter on proprietary systems; it's also part of why Linux doesn't have fat binaries (why bundle multiple architectures together when everybody can just grab or build each one themselves?).


That's the reason installing proprietary software on Linux used to be a massive pain. The expectation is that the distro maintainer packages everything for you, but they only do that for things that are open source. This has changed since Flatpak was created.


And it still breaks things if you've built things yourself, because they're not part of your distro.


I don't know if I completely buy into that argument. APIs on the bleeding edge are often prone to breakage because the development frontier is in constant flux.

However, software that neglects to update also tends to break down as libraries change to reflect new knowledge.

If you're developing software based on someone else's library you are implicitly accepting the demand to modify your software to maintain currency with those underlying requirements. It seems a bit far-fetched to expect that you only have to write the software once without having to maintain it.


What I'm saying is that that is a choice on the part of the library developer.

You can make APIs that are ABI stable (literally that's the entirety of the Microsoft and Apple platform APIs, and plenty of open source libraries as well - Qt and GTK for instance). You can also explicitly choose not to (which I believe includes OpenSSL).

Basically, if a library or application cannot or will not provide a stable ABI, it cannot be used by a platform where the default update model does not include the possibility of recompiling all software. Such a model would essentially require Apple and Microsoft to have the source for all apps, and to be the primary/only distribution point for all of them as well - note that this is how most Linux distributions operate: any given update can result in an arbitrarily large portion of all software for the system being recompiled.



