
Yes, I understand the mechanism, but why would you want to structure your OS like that?

You either have to maintain a stable syscall ABI, or a stable userspace function ABI, and it's not like one is materially more difficult than the other. If anything, the Linux approach has proven to be superior -- glibc has had multiple breaking changes over the years and to this day users will resort to Docker to run binaries that depend on "old" glibc, but programs written against the Linux syscall ABI twenty years ago will run fine to this day.
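To make "written against the Linux syscall ABI" concrete, here's a minimal sketch (not from the thread; x86-64 numbers assumed: 1 = write, 60 = exit). Built with something like gcc -nostdlib -static, a binary like this bypasses libc entirely, and the kernel's "don't break userspace" rule is what keeps it running decades later:

    /* hi.c -- talk to the kernel directly, no libc.
       The syscall instruction takes the number in rax and
       arguments in rdi/rsi/rdx; it clobbers rcx and r11. */
    static long raw_syscall3(long nr, long a, long b, long c) {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(nr), "D"(a), "S"(b), "d"(c)
                          : "rcx", "r11", "memory");
        return ret;
    }

    void _start(void) {
        raw_syscall3(1, 1, (long)"hi\n", 3);   /* write(1, "hi\n", 3) */
        raw_syscall3(60, 0, 0, 0);             /* exit(0)             */
        __builtin_unreachable();
    }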

Even if you decide to have a userspace trampoline/shim, it doesn't make sense for that to be libc. The C standard library is huge and requiring userspace programs to link it regardless of implementation language does nothing except add forty years of bad ideas into your process's address space.



Multiple organizations ship Linux kernels which do not support the original x86-64 system call ABI at all. Old applications keep running only because they are dynamically linked against glibc, and the system glibc is new enough to use the new system call ABI. Some Linux subsystems are curiously exempt from the stability guarantee, and others offer different ABIs selected through compile-time or system configuration. Either way, as an application developer, you cannot be sure which ABI will be available.

On the glibc side, it's challenging for us upstream developers because a lot of people assume we do not aim to provide backwards compatibility, so they do not bother reporting compatibility issues. I suspect this perception exists because one of the backwards-compatibility mechanisms we use prevents running binaries built on newer systems on older systems. (New programs bind to a new versioned symbol carrying the backwards-incompatible change; old programs keep getting the old implementation.) But running a new binary on an old system is not backwards compatibility at all, it's a request for forward compatibility.
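For those unfamiliar, the versioned-symbol mechanism looks roughly like this (a sketch, not glibc's actual source; foo, VERS_1, and VERS_2 are made-up names):

    /* foo.c -- two incompatible implementations coexist in one DSO.
       foo@VERS_1 is what binaries linked before the change bind to;
       foo@@VERS_2 is the default for anything linked from now on. */
    int foo_old(int x) { return x; }       /* behavior old binaries expect */
    int foo_new(int x) { return x + 1; }   /* new, incompatible behavior   */

    __asm__(".symver foo_old, foo@VERS_1");
    __asm__(".symver foo_new, foo@@VERS_2");

The version nodes are declared in a linker version script and the DSO is built with something like gcc -shared -fPIC -Wl,--version-script=foo.map foo.c -o libfoo.so:

    VERS_1 { global: foo; local: *; };
    VERS_2 { global: foo; } VERS_1;

The catch the comment describes: a binary linked against the new library records foo@VERS_2, so an older glibc that only defines VERS_1 refuses to load it. That is the missing forward compatibility, not a backwards break.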

The goal is to require recompilation only in limited scenarios: static libraries and object files not yet fully linked, deliberate dependencies on undocumented internals (e.g., internal struct offsets), and dependency on behavior that is not standards-conforming (e.g., non-sticky EOF on stdio streams). And in the latter case, it's often possible to add a kludge to maintain backwards compatibility.
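The stdio case in a nutshell: C99 makes the end-of-file indicator sticky, so a program polling a growing file must call clearerr() before it can see newly appended data; code that relied on glibc's old non-sticky behavior skipped that call. A sketch:

    #include <stdio.h>

    /* Read the next byte from a file that may grow (tail -f style). */
    int next_byte(FILE *f) {
        int c = fgetc(f);
        if (c == EOF && feof(f)) {
            clearerr(f);   /* C99: EOF is sticky; without this, every
                              later fgetc keeps returning EOF even
                              after the file has grown */
            c = fgetc(f);
        }
        return c;
    }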


> If anything, the Linux approach has proven to be superior -- glibc has had multiple breaking changes over the years and to this day users will resort to Docker to run binaries that depend on "old" glibc, but programs written against the Linux syscall ABI twenty years ago will run fine to this day.

Windows takes the opposite approach (except that the libraries are gdi32/kernel32/ntdll/user32/etc.), and it's definitely an example of backwards-compatibility done well.
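Concretely, the stable contract sits at the DLL export, not the syscall: NT syscall numbers shift between Windows builds, so programs go through the documented functions, e.g.:

    /* The documented kernel32 surface is what Microsoft keeps stable;
       the NT syscall numbers underneath it change between builds. */
    #include <windows.h>

    int main(void) {
        HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
        DWORD written;
        WriteFile(out, "hello\r\n", 7, &written, NULL);
        return 0;
    }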


I think the case of Windows is slightly different. Its architecture has the same backwards-compatibility challenges as GNU libc, but where GNU took a user-hostile approach ("just recompile"), Microsoft invested thousands of person-years to mitigate and resolve issues as they were discovered.

Most folks reading this are likely familiar with Raymond Chen's blog posts about compatibility shims for older binaries, or stories like the Windows 95 workaround for memory-management bugs in SimCity. That's the kind of thing that happens when you design an OS ABI with problems but are committed to bearing the cost of those problems yourself rather than passing them along to users.

I'm sure if you asked today's Windows team if they'd have a different approach, they'd rattle off a half-dozen great ideas for backwards-compatible syscall ABIs. There's been lots of research in that area since the foundations of our current OS ecosystem were laid down 30 years ago.



