
Learning from OpenBSD can make computers marginally less horrible - ArcVRArthur
https://telegra.ph/Why-OpenBSD-is-marginally-less-horrible-12-05
======
CharlesColeman
> OS Application Binary Interface (ABI) release inter-compatibility is the
> cancer killing the modern operating system.

I think that's only true from the perspective of an OS developer:

ABI inter-compatibility (e.g. the Windows and Linux model) prioritizes
_customer_ experience. Customers hate it when their applications stop working,
and application developers don't want to spend lots of effort to track
platform API changes just to avoid breakage.

Abandoning ABI inter-compatibility (OpenBSD, Apple) prioritizes _platform
developer_ experience. They want to be able to freely make API changes and
don't want to spend time maintaining old APIs when they could spend it
working on new ones.

I think the problem with the latter is, while there may be a few hundred
developers working on a particular OS, there are orders of magnitude more
customers and application developers. Totally abandoning ABI inter-
compatibility seems like putting the interests of the very few over the
interests of the very many.

~~~
_bxg1
Except it means unbounded growth in complexity, which inevitably leads to,
among other things,

\- More bugs

\- More vulnerabilities

\- Increasing cost of support

Which absolutely impact the customer experience, just not in the short-term. A
little bit of short-term friction averts long-term intractability.

~~~
CharlesColeman
That's true: those things do negatively affect customer experience. However
they don't affect customer experience _as negatively_ as the experience of
having their software break.

To put this in another context: newer building codes may result in better and
safer homes, but it'd be extremely user-hostile to force homeowners to
proactively upgrade their homes to compliance each time a new version is
released (at the threat of having their home condemned if they do not). The
sensible compromise, in buildings and software, is to allow things to be
upgraded over time, as they're modified.

~~~
boring_twenties
> However they don't affect customer experience as negatively as the
> experience of having their software break.

They affect it far worse, because they affect _every_ user. Having
unmaintained/outdated software break only affects the subset of users that
want to use that particular software.

~~~
yjftsjthsd-h
You know the saying about how no Excel user uses more than 10% of its
features? But everyone uses a _different_ 10%, so ~100% matters? I defy you to
find me a business, or probably even a human, more than 2 years old (using
computers for more than 2 years) and not using any "legacy" applications. We
maintain compatibility for everyone, because everyone uses it.

~~~
boring_twenties
Who is using these legacy applications and for what? 99% of people use a web
browser only these days. As for myself, the closest thing I can think of is
some in-house legacy crap, but even that was 10 years ago. The majority of
businesses and humans don't have any of this. What kind of circles do you run
in where people are routinely using legacy software?

~~~
tsimionescu
You seem to think that software must be a continuously updated thing, or it
becomes legacy. This is somewhat true in the current world, but it is
massively wasteful and unnecessary. It should be normal for software to be
finished, and one should expect finished software to keep working for many
years.

One huge market where this does happen is games. Disregarding the current
plague of microtransaction-funded 'live experiences', most games are pieces of
software that get released and are mostly done, barring some added content
going out for a year or two. Losing the ability to play these games because
someone has decided that ABI compatibility is kinda hard is ridiculous, and
would definitely not fly for a consumer OS.

It would be interesting for someone to try to apply this same argument to
hardware: would it make sense to abandon old hardware support every release?
Doing this with device drivers was one of the things which hurt Linux adoption
on the desktop, and hurt Windows Vista's release immensely.

Overall, end-users do not and should not care for OS updates. They are a
necessary evil, to help fix bugs that the OS developers missed that threaten
their security; and to be able to use new applications that rely on new OS
features. But breaking old applications or hardware is a massive pain point
that makes users wary of updating despite the risk to their security.

~~~
clarry
> Losing the ability to play these games because someone has decided that ABI
> compatibility is kinda hard is ridiculous, and would definitely not fly for
> a consumer OS.

Old games have a tendency to break for other reasons even without ABI breaks.

I think it's ridiculous that games are still primarily closed source binary
blobs that cannot be easily fixed and patched by the users to keep them
running fine for decades.

------
jdsully
It seems like the focus on developer ergonomics over actual users has been
monotonically increasing since I started my career. It may be great for the OS
developers that they don't have to care about back compat but it is terrible
for the users. Your API may be beautiful but my software no longer works, so
the system is useless.

In the case of Linux everyone is now shipping entire userlands with their
applications via docker just to work around compatibility issues. We'd be
shipping entire VMs if the kernel wasn't the only one holding the line on
compatibility.

It's been a long time now since I saw a programming post talking about how some
new paradigm or way of doing things would make life great for the users.

~~~
ori_b
> Your API may be beautiful but my software no longer works, so the system is
> useless.

If you install from a package, just update. If you built from source, just
recompile. If you got a binary, use the support contract you paid for. And if
you paid for a binary without a support contract, you got screwed hard, since
you can't get bug fixes even if the OS was immutable. But if you did screw
yourself, there's vmd that lets you freeze your OS in time.

~~~
saagarjha
> since you can't get bug fixes even if the OS was immutable

An immutable OS prevents new bugs from cropping up.

~~~
ori_b
It certainly keeps old security holes in play.

------
musicale
One of the most horrible things about iOS is that it breaks your apps every
year.

This is a terrible experience for customers (since their apps break every
year) and for developers (since they have an ongoing maintenance burden dumped
on them by Apple just to keep their apps working across yearly iOS updates.)

The main beneficiary of abandoning ABI compatibility (as Apple has done) is
the platform developer (e.g. Apple) who avoids the maintenance burden of
backward compatibility.

It's arguably the wrong approach because it helps the platform developer
(Apple) at the expense of existing customers and developers. There is a
multiplicative burden of pain: each time Apple breaks something, millions of
customers and thousands of developers pay an immediate price.

There is a long-term user benefit to platform evolution, but the short-term
cost is relentless and ongoing.

For game developers in particular, the stability and backward compatibility of
Microsoft/Sony/Nintendo platforms is a dream compared to the quicksand of iOS
development.

~~~
Redoubts
> One of the most horrible things about iOS is that it breaks your apps every
> year.

You have to be a really terrible app developer for that to be true.

~~~
csande17
If the number of app updates I get every year with "iOS XX compatibility" in
the release notes is any indication, there must be a lot of really terrible
app developers in the world!

~~~
ben509
That would be consistent with all the really terrible apps I've seen.

------
juped
Breaking ABIs freely is a decision OpenBSD made that's been helpful in many
ways, but the lesson we should learn from them is to engineer layers of
failure mitigation into all our systems. Software bugs are unknown unknowns.

------
ggm
Selecting a BSD comes with an implied social contract regarding its mutability
across versions. If you go into OpenBSD believing code from n-3 runs on
version n+1 you misunderstood the social contract. FreeBSD or NetBSD or
DragonflyBSD might have a different social contract.

Selecting OSX used to imply much more of an attempt to handle this: maybe n-3
is outside the goal, but n-1 and n+1 usually kinda work. Except when things
like "we don't want 32-bit any more" hit, after 2 or more years of heads-up.
Turns out vendors don't want to incur that cost. Stuff which people want and
"depend on", like kexts, stops working.

Consider how Python 2 dependencies are going in a world of Python 3, and
that's userspace, not ABI. It's not the OS, but it's similar.

~~~
EdwardDiego
> Selecting a BSD comes with an implied social contract regarding its
> mutability across versions.

Indeed, which is why its market share is tiny.

~~~
yjftsjthsd-h
I seriously doubt that's the reason, especially compared to hardware support
and the usual hurdle of "not installed by default".

~~~
ggm
It might be some people's reason. I got to a point where I couldn't even get
decent 2D X behaviour, and DSDT configs for laptops stopped working, or even
depended on Linux to get them working. It was a signal. Van Jacobson dropping
primary development of his TCP work in BSD and moving to Linux was another
signal to me, and maybe to some others.

Overwhelmingly I think desktop support and the Ubuntu/LTS effect did it:
FreeBSD demanded more of you, to get it to work. The working outcome I still
like, but commodity UNIX is just simpler from OSX, or from Ubuntu. And vendors
back it enough to mean you can get more things to work, more quickly, closer
to the cutting edge. I am pretty sure I will get a working Linux desktop on
any laptop I plausibly buy next time. I believe 80% of things will work fine
in FreeBSD but the last 20% (Synaptics driver, fingerprint driver, TPM driver,
blob-ridden WiFi Driver...) are going to be hard.

------
ben509
> Otherwise various efforts making use of containers, lightweight
> virtualization, and binary wrappers for the purposes of introducing new
> options to companies allowing them reasonable backward compatibility for the
> various applications that have become entrenched in their organizations will
> be the only way to break away from the stagnation of the current paradigm of
> enterprise operating system development.

That was essentially what MS did with "Windows on Windows" that brought 16-bit
applications over to Win32. And Apple with Rosetta, the blue box, etc. These
were hugely expensive because they had to track down all the unwritten
interfaces applications use.

If Linux standardizes virtualization for enterprise support, applications
should run in it all the time, so it's impossible for them to access any
private interfaces.

And it's sustainable because when enterprises find they're stuck with these
closed source applications, they'll have a direct interest in supporting
maintenance of the older virtualization.

------
klodolph
> Companies who make such investments often view the money they've paid for
> the development of this software in a similar manner to how they would view
> the investment into any other asset - which is to say that the expectation
> is that it will continue to function for years.

“Any other asset” is not informative. When my company buys me a laptop, the
assumption is that it will continue to function for three years. When they buy
me a chair, seven. When they buy a building, thirty.

That’s an order of magnitude difference in depreciation schedules. The two
problems here I see are:

1) Nobody in the accounting department had any clue how to do this in the
1980s and 1990s. So their cost projections were badly inaccurate, and they
didn’t have realistic depreciation schedules.

2) The contracting firms are not incentivized to do maintenance and don’t even
know how to do it in the first place.

> As nice as backward compatibility is from a user convenience perspective
> when this feature comes as a result of a static kernel Application Binary
> Interface this trade-off is essentially indistinguishable from increasing
> time-preference (or in other words declining concern for the future in
> comparison to the present).

This absolutely _is_ distinguishable. Backwards compatibility is a complex
tradeoff, no matter who you are (OS developer, app developer, end user, etc).
It’s as complex as opex vs capex (and probably more similar to that tradeoff).

------
bonzini
This makes no sense. The bulk of the ABI compatibility is not in the kernel,
and Linus's mantra of "not breaking userspace" hardly applies to applications
from the Linux Foundation's highest-paying members. The bulk of the ABI for Linux
applications comes from libc and other libraries.

The one case where breaking ABI would make things so much easier is y2038,
but it only applies to 32-bit systems, again nothing that matters to the Oracles
and SAPs.
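
For reference, a minimal C sketch of the y2038 overflow, assuming a signed
32-bit time_t (the wrap happens at 03:14:08 UTC on 2038-01-19):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
    
    int main(void) {
        int32_t t32 = INT32_MAX;  /* last second a 32-bit time_t can hold */
        time_t t = (time_t)t32;
        printf("%s", ctime(&t));  /* Tue Jan 19 03:14:07 2038 in UTC;
                                     local time zones will differ */
        /* one second later, a signed 32-bit counter wraps to 1901-12-13 */
        return 0;
    }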

~~~
wltprgm
> The bulk of the ABI compatibility is not in the kernel

So does this mean the Linux kernel is not as bulky and full of bugs as the
article has claimed?

[https://en.wikipedia.org/wiki/Linux_kernel_interfaces#Linux_...](https://en.wikipedia.org/wiki/Linux_kernel_interfaces#Linux_ABI)

[https://upload.wikimedia.org/wikipedia/commons/b/bb/Linux_AP...](https://upload.wikimedia.org/wikipedia/commons/b/bb/Linux_API_and_Linux_ABI.svg)

p.s. I am just an average Linux user who wants to know more about this

~~~
bonzini
Most of the 100,000,000 lines of code in Linux are drivers, or support for
architectures that you have never seen.

Yes, the core is bigger than OpenBSD's. It's also more scalable and generally
has higher performance. It's got nothing to do with backwards compatibility.

------
dvfjsdhgfv
> Linus Torvalds continues receiving his Linux Foundation salary paid for by
> the massive cheques it's member organizations cut him in exchange for
> influence over the kernel's development.

The author seems focused on that aspect as _the_ reason Linus is against ABI
changes. But in fact this was his stance for years as he's user-centric:
people expect things to continue to work when they upgrade the kernel, so if
you have to break their experience, you really need a very good reason. It's
not like he only started thinking this way when he became an employee of the LF.

------
davidgerard
The writer mentions the corporate user, but then never mentions them again.

I use a Linux desktop. If I want old versions of open source stuff, Wine
running the Windows binary is where it's at.

We now have a complete, futureproof free software stack with decades of
backwards compatibility! It just has win32 in the middle.

This article makes the case for the advancement of OS development, but not for
what people use computers for.

------
matheusmoreira
I think the stability of user space interfaces is simply good engineering.
Linux can run binaries compiled way back in the 90s. Because of this
discipline, people trust Linux as a platform. People generally have no
problems updating their kernels and it's safe to assume there will be no
problems. This isn't the case in user space: many projects have no problem
with breaking compatibility and forcing dependent packages to be updated as
well.

The author claims Linus Torvalds enforces Linux binary interface stability
because the Linux Foundation members that pay his salary want it. Is this
really true? If that was the case, I'd expect the internal kernel interfaces
to be stable as well. They are unstable and he actively fights to keep them
unstable even though the companies would very much enjoy having stable driver
interfaces.

[https://yarchive.net/comp/linux/gcc_vs_kernel_stability.html](https://yarchive.net/comp/linux/gcc_vs_kernel_stability.html)

> Stuff outside the kernel is almost always either (a) experimental stuff that
> just isn't ready to be merged or (b) tries to avoid the GPL.

> Neither is worth a _second_ of anybodys time trying to support, and when you
> say "people spend lots of money supporting you", you're lying through your
> teeth. The GPL-avoiding kind of people don't spend a dime supporting me,
> they spend their money actively trying to debase and destroy what I and
> thousands of others have been working our butts off for.

> So don't try to make it sound like something it isn't. We support outside
> projects a hell of a lot better than we'd need to, and I can tell you that
> it's mostly _me_ who does that. Most of the core kernel developers argue
> that I should support less of it - and yes, they are backed up by lawyers at
> their (sometimes quite big) companies.

------
microcolonel
I will say, Windows 95 was pretty great, I identify with the Microsoft
customer in the hero image. I'm gathering notes to write a GUI toolkit which
only makes well-formed Windows 95-style UIs.

~~~
protomyth
Maybe my memory is a bit rose-colored, but I still think NT 4.0 was great:
the Win 95 interface and a rock-solid OS.

~~~
zabzonk
Windows 2000 was even more rock-solid. I used to run it on a couple of Sony
Vaio laptops and I never had a blue screen.

~~~
protomyth
Didn't they move the video driver back into the kernel in 2000?

~~~
microcolonel
Yeah, but they actually worked so it was fine.

Of course, I will not attest to any architectural advantage of NT, especially
today. Everything from the filesystem to the schedulers to the memory
management... it all leaves a lot to be desired. Maybe with Genode coming
along, we'll get a serviceable seL4 desktop that I can run my Chicago-style UI
on. :- )

My impression of the eventual ideal is that the formally-verified stuff can be
allowed into the kernel, if there is some valid reason to do so; and
everything else can sit elsewhere.

~~~
mmis1000
At least in recent Windows 10 previews, patching kernel memory itself is no
longer allowed; only hooks may be used to alter kernel behavior (thus
breaking some silly anti-cheat engines). It also comes with an option to
enforce this with virtualization.

So yes, it is ongoing. But not kernel -> userspace; instead it is
hypervisor-mode kernel(?) -> kernel.

------
dankohn1
I authored this document on the Linux Device Driver model 11 years ago and
amazingly it still represents the current policy:
[https://www.linuxfoundation.org/events/2008/06/the-linux-dri...](https://www.linuxfoundation.org/events/2008/06/the-linux-driver-model-a-better-way-to-support-devices/)

Specifically, the Linux kernel maintainers, not the Linux Foundation,
determine the policy that the user space ABI remains stable while the device
driver API is unstable.

Disclosure: I work for the Linux Foundation, and I know that if we told the
kernel maintainers to change their policy they would laugh at us.

~~~
yobert
Agreed. And I don't think Linus' opinions on compatibility come from the
funding model.

------
egdod
This is all lovely as a matter of the platonic ideal of an operating system.
But... the users have spoken. They don’t want their software to break.

Worse is better, and Microsoft got this one right.

------
lallysingh
The way I see it, VMs already encapsulate this. App --ABI--> VM'd Kernel ->
Hypervisor API.

But we can do this much more efficiently. IIRC, prior variants of this were
called "personalities". I think the term's been reused now.

I think we could have the program loader consume the loaded program and act as
an API proxy between it and the actual kernel.
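
As a rough illustration (not how any existing system does it), the loader
could trace its child and interpose at the syscall boundary. A minimal
ptrace-based sketch for Linux/x86-64, with the actual translation left as a
comment:

    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <unistd.h>
    
    int main(int argc, char *argv[]) {
        if (argc < 2) return 1;
        pid_t child = fork();
        if (child == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);
            execvp(argv[1], &argv[1]);       /* run the legacy binary */
            return 1;
        }
        int status;
        waitpid(child, &status, 0);          /* initial stop after exec */
        while (!WIFEXITED(status)) {
            /* resume until the next syscall entry or exit */
            ptrace(PTRACE_SYSCALL, child, NULL, NULL);
            waitpid(child, &status, 0);
            if (WIFEXITED(status)) break;
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            /* an API-proxy "personality" would rewrite an old ABI's
               syscall number and arguments here before resuming */
            fprintf(stderr, "syscall %llu\n",
                    (unsigned long long)regs.orig_rax);
        }
        return 0;
    }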

~~~
mmis1000
It sounds like what Solaris containers did. The kernel is responsible for
handling kernel ABI compatibility, and everything, including the system
utilities, runs inside a container that is given an ABI simulated by the
kernel.

The model is App -- Static ABI --> [Simulated Kernel ABI by actual kernel] ->
Actual kernel.

Everything outside of the specified kernel ABI version does not exist as far
as the application is concerned.

So it can run applications that are many years old, as long as the kernel is
willing to simulate the ABI for them.

This is also how 64-bit Windows runs Win32 apps and WSL: there is an API
proxy inside the kernel that simulates the API for them.

------
thunderrabbit
This article helped me understand a lot; I knew development on iOS required
constant updates, but now I know _why_. Thank you.

BTW, there are several misspellings of "its" in your article. Search for
"it's" because most of them should be changed to "its"

------
ptah
no thanks. I have had the terrible experience of being forced to upgrade
software purely because a newer version of macOS does not support the old
version of my music software. I am looking at going completely hardware now
for music production so I don't have to deal with the unnecessary upgrade
treadmill that is entrenched in computer culture.

EDIT: forced into paying for upgrade

------
_bxg1
Really interesting. I didn't know much about OpenBSD before, nor did I know
that Windows/Linux maintain ABI compatibility indefinitely, although it makes
sense.

It's also interesting to consider the web as an application platform in this
context. It too has an append-only API that places high importance on
indefinite backwards-compatibility. However, because that API is _dynamic_,
not binary, the underlying implementation has much more room to maneuver and
re-structure without breaking it.

~~~
aidenn0
Note that while the Linux kernel does maintain ABI compatibility
indefinitely, the same is not true for glibc, so any dynamically linked
applications (i.e. most applications in the past 20 years) have very poor ABI
compatibility.

~~~
danieldk
_the same is not true for glibc, so any dynamically linked applications (i.e.
most applications in the past 20 years) have very poor ABI compatibility_

Other libraries, sure, but when it comes to glibc, this is false. glibc uses
symbol versioning. E.g. a program that uses _fork_ uses a versioned symbol:

        $ nm a.out | grep fork
                     U fork@@GLIBC_2.2.5

glibc typically ships functions at both the current ABI version and previous
ABI versions, so it supports programs compiled against many older versions of
glibc.

See:

[https://developers.redhat.com/blog/2019/08/01/how-the-gnu-c-...](https://developers.redhat.com/blog/2019/08/01/how-the-gnu-c-library-handles-backward-compatibility/)
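
Going the other way, a program can pin an old symbol version explicitly so
its binary keeps running on older glibc builds. A hypothetical sketch (the
version string GLIBC_2.2.5 is the x86-64 baseline; other platforms differ,
and GCC may need -fno-builtin so the call isn't inlined away):

    #include <string.h>
    
    /* bind memcpy to the old versioned symbol instead of the default,
       newest one (e.g. memcpy@GLIBC_2.14 on x86-64) */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");
    
    int main(void) {
        char dst[16];
        memcpy(dst, "hello", 6);  /* resolves to memcpy@GLIBC_2.2.5 */
        return 0;
    }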

~~~
aidenn0
FWIW, I've seen dozens of programs break with changes to glibc.

------
mijoharas
> the project enforces a hard ceiling on the number of lines of code that can
> ever be in ring 0 at a given time

I tried googling to find what this limit is and where it's mentioned. Could
anyone help me out with a link? What is the limit?

------
plantsbeans
IIUC, ABI compatibility is one of the key design goals of the Fuchsia OS
project.

Is that accurate?

------
dehrmann
Maybe the answer is a rolling window of stability for OS APIs--something like
10 years (Windows 10 having Windows 95 compatibility mode is a bit absurd). On
the other hand, if you have a large library of test software, maintaining API
bridges might be doable, and for software more than 5 years old, performance
on modern hardware shouldn't be a major concern.

~~~
blincoln
There are major corporations running key business software which is 30, or
even 40 years old. I wouldn't be surprised if some were even hitting 50+
with old COBOL mainframe applications.

This is one of the main reasons that corporate OS producers like Microsoft
support backward compatibility that seems excessive.

You're probably right that within a given hardware platform, older software
will generally be very fast on newer hardware, but if one tries to migrate
platforms (e.g. mainframe emulation on Linux or Windows), that's not a safe
assumption. Around 2009, I saw a team try to migrate some of those early-80s
mainframe apps to an emulator, and even on high-end HP servers running
Windows, performance was too poor to use a lot of it in production, because it
had all been written with ridiculously high-throughput mainframe storage I/O
in mind, and emulation couldn't keep up.

You may think (as I do) that those corporations should just bite the bullet
and replace those old apps with something modern, but they're the ones writing
the checks, so Microsoft and company give them what they want.

------
xvilka
And it's time to innovate in the languages used for writing OS kernels and
core services as well. All mainstream kernels are stuck with C/C++. Something
newer and cleaner (Rust, D, you name it), a language that isn't afraid to
deprecate legacy either, and one that offers many important new features for
OS developers.

