It would suck. We've been there before - older OSes like VMS were a lot more feature-rich and expected deeper integration from their applications. But the Unix "worse is better" approach won out. I think it's a similar effect to "the standard library is where modules go to die": the more functionality gets put in the OS, the slower it can be iterated on. So however cool what you originally build into the OS is, it gets lapped by what eventually gets implemented by loosely coupled userspace programs.
> But the Unix "worse is better" approach won out.
Did it? In my opinion, modern Unix systems are much more like VMS than they're like traditional "worse is better" Unix. One of the classic "worse is better" interfaces is synchronous character IO. That is, to write a string, you call the write() routine repeatedly, sequentially, as you have things to write out. The call blocks the program until it completes.
It's trivial to implement. It's a nice, easy-to-understand API from a programmer's perspective. It's also so terribly inefficient that every modern Unix has bolted on VMS-style asynchronous IO to replace it: queues of transfer requests, where you describe the full IO transaction, submit it to the OS, and get an interrupt when it completes. More complex, unfriendly to the programmer, yet fundamental to getting real IO throughput from an operating system. Worse is better did not stand the test of time there.
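The difference in API shape can be sketched in a few lines of Python (purely illustrative; a thread pool stands in for the kernel's submission/completion machinery, the way io_uring or VMS QIO would do it for real):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

# "Worse is better" synchronous IO: each write blocks until it completes.
def sync_writes(fd, chunks):
    for chunk in chunks:
        os.write(fd, chunk)              # the program stalls on every call

# VMS-style asynchronous IO: describe the full transaction up front,
# submit the whole queue, then reap completions. A thread pool stands in
# for the kernel's submission/completion queues.
def async_writes(fd, requests):
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(os.pwrite, fd, data, offset)
                   for offset, data in requests]
        return [f.result() for f in futures]   # "interrupt" on completion

fd, path = tempfile.mkstemp()
sync_writes(fd, [b"hello ", b"world"])
done = async_writes(fd, [(12, b"hello "), (18, b"world")])
os.close(fd)
os.unlink(path)
```

Note how the async version forces you to think in whole transactions (offset, buffer, completion) instead of a sequential stream of blocking calls.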
A few dedicated high-performance programs use those interfaces, but I wouldn't say they're mainstream, and they still seem to be oriented towards unstructured byte sequences and "everything is a file" as far as possible. And of course the highest-performance systems bypass the kernel entirely and do everything in userspace.
OTOH one could make the argument that the pendulum is swinging back with things like dbus and overlayfs, or even modern linux sound APIs. So IDK.
Looking at who's ruling desktop and mobile OSes, it doesn't seem like UNIX won.
Yes, there is a UNIX under macOS and some POSIX under iOS/iPadOS/watchOS and Android, but that hardly matters to the userspace frameworks and official programming stacks.
And naturally Windows is a kind of VMS reboot in spirit.
UNIX only won in the server room, and even that is starting to become irrelevant in the age of cloud computing and OS-abstraction infrastructure.
> Yes, there is a UNIX under macOS and some POSIX under iOS/iPadOS/watchOS and Android, but that hardly matters to the userspace frameworks and official programming stacks.
On the contrary, the fact that the parts where development actually happens keep shifting further and further into userspace is exactly what I'm getting at.
Maybe it won out on servers, but the world is mobile now, and mobile OSes are all about deep integration.
Linux has some of it with DBus, and a lot of the coolest stuff is built on it, or on some other standardized rich shared daemon like PipeWire. That's about as close as Linux can get to being a feature-rich OS, since the kernel devs would never go for making it truly integrated and monolithic like Windows.
The slower you iterate, the less you break, and the more focus you can have on app development rather than low level tool development.
The time you waste on not quite that great stale APIs seems to be offset by the time you save not constantly fixing stuff.
I suspect that the larger an app gets, the more it benefits from stability: a million-line app is probably going to have more code that nobody has needed to touch for 8 years.
> What if we turned everything upside down.. What if there was ONLY the SYSTEM.. And everything was a plugin, literally, everything.. Your drivers, your web browsers your libraries, your shells and GUI and games.. All of it, plugins to the SYSTEM.. What does that even mean?
But seriously what if modern OSes were as responsive and resilient as QNX?
Though after building a modern desktop recently, it does kind of feel like building a minimally viable computer and mounting a secondary computer into it for graphics, at least when I think about the power situation. So maybe a move towards a less-is-more, very modular OS could find some niches today. It would be awesome if graphics drivers were more like a standardized communications interface than a binary blob dependent on OEM good effort.
The way I read it the post is suggesting the opposite - making the macrokernel even more macro, tightly integrating what were traditionally userspace programs with the OS proper (perhaps having everything run as a kernel module).
You mean like Emacs, the comprehensive operating system without a decent text editor?
I've noticed this, how most operating systems (or, rather, shells) are structured around running "applications software" to perform specific tasks, and how it differs from computing with Emacs (or Smalltalk, or a Lisp OS) where all of the system is available all the time. You get some of that back in Windows with COM, but even COM was oriented toward siloed applications being able to call into each other's silos. It took Microsoft some time to make even common controls and dialogs available. There were aspects to the Office UI, like the status bar and toolbar, that became table stakes in other applications -- but you had to hand-code them yourself if you wanted them.
The Macintosh, or more properly the Lisa, ruined computing. Had Jef Raskin's Canon Cat taken off, we would have seen a vastly different computing landscape.
Classic Mac was like this (as were other systems): the toolbox started in ROM, but the OS would patch out its jump table (effectively) as needed to add new functionality or fix bugs. Since the interface and data structures were basically what was defined in 1984 for much of it, it was really easy to insert yourself as a third party up and down the system. It was a lot of fun, and it completely fell apart because of memory-safety issues.
Even more so with the Mac's predecessor, the Lisa. Lisa didn't really have the concept of "applications" or "user programs". The "document" was a central feature, a core OS concept. Programs provided the OS with handlers for new document objects, which might be interactive. No programs ever started or exited from the user's perspective; it was always the same document interface, with slightly different document properties, depending on the document. In the background it was transparently loading and running the different handlers as OS tasks. No concept of "saving a file" either; the document state was implicitly always preserved when you closed it.
It was mostly programmed in object-oriented Pascal. One of the first big uses of an OO language. And it was terribly big and slow, requiring a fatally expensive amount of RAM at the time. A lot of those abstractions were stripped out for Macintosh in a bid to slim things down extensively, and they have not really been tried elsewhere since.
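A toy sketch of that document-centric model (all names invented here), where programs hand the OS handlers for document types and state round-trips implicitly:

```python
# Hypothetical sketch of a Lisa-style document layer: programs register
# handlers for document types; the user never "runs" a program or
# "saves" a file - state is restored on open and persisted on close.
handlers = {}
store = {}   # stands in for the OS's implicit document storage

def register_handler(doc_type, handler):
    handlers[doc_type] = handler

def open_document(doc_id, doc_type):
    state = store.get(doc_id, {})    # previous state restored automatically
    return handlers[doc_type](state)

def close_document(doc_id, state):
    store[doc_id] = state            # no explicit "Save" step exists

def drawing_handler(state):
    state.setdefault("shapes", [])   # initialize a fresh document
    return state

register_handler("drawing", drawing_handler)

doc = open_document("memo-1", "drawing")
doc["shapes"].append("circle")
close_document("memo-1", doc)
reopened = open_document("memo-1", "drawing")   # the circle is still there
```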
> The "document" was a central feature, a core OS concept.
Isn't this largely the same in the modern desktop? File explorers do call handlers for each file type. But it's also true that recent trends among apps have half-killed the concept of the file explorer: they use built-in search functions these days. That's good in its own way, but it's often necessary to go through the filesystem when working with multiple apps, which can be awkward in terms of data management.
Also, many apps automatically save state on close. The problem is that the behavior is far from standardized, and there are tons of different ways apps handle their state. This usually gets really painful, as one must learn each app through trial and error.
The PC could go there somewhat too; you have the interrupt vector table at the bottom of memory, nominally in control of the BIOS.
If you invent a new disk controller, for example, you can change the vector for Interrupt 0x13 (the BIOS disk services) to point to your code, and if you realize from the body of the request that the call was intended for another device, you pass it on to the original handler.
There were two problems:
* The standard BIOS features were somewhat limited and narrow.
* They added enough overhead that you might want to ignore them and start fiddling with registers directly.
This was especially true for video control (BIOS Interrupt 0x10). Nobody was going to follow through on the proper way of stuffing the framebuffer when it was memory-mapped and you could just dump stuff straight into it. So on the one hand it meant BIOS-level compatibility wasn't enough, and on the other it forced hardware rictus (I could imagine, for example, a video standard which communicated with I/O ports to open up more addressable memory, but there's no way it would have supported anything but the most well-behaved software).
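The hook-and-chain pattern itself is simple; here's a toy Python model of it (the real vector table is an array of segment:offset pairs at the bottom of memory - here it's just a dict, and the handler names are invented):

```python
# Toy model of BIOS interrupt vector chaining: a dict maps interrupt
# numbers to handlers, and a new driver hooks the disk-service vector
# while delegating requests it doesn't own to the previous handler.
vectors = {}

def bios_disk_handler(request):
    return f"BIOS handled drive {request['drive']:#x}"

vectors[0x13] = bios_disk_handler    # INT 0x13: BIOS disk services

def install_new_controller():
    original = vectors[0x13]         # save the previous vector
    def hooked(request):
        if request["drive"] == 0x81:          # a drive we own
            return "new controller handled drive 0x81"
        return original(request)              # chain to the old handler
    vectors[0x13] = hooked           # patch the vector table

install_new_controller()
old_drive = vectors[0x13]({"drive": 0x80})   # falls through to the BIOS
new_drive = vectors[0x13]({"drive": 0x81})   # handled by the new code
```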
The idea does not sound to me like it is fully formed. But it reminds me of "datatypes" in Amiga OS 3.0.
A datatype was a kind of plugin that could be used to decode/encode a specific file format. You could just drag the datatype file icon into the "DEVS:Datatypes" directory, and it would become available to all programs that requested datatypes for a specific category of file formats.
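The mechanism can be sketched like this (Python, names hypothetical; the real system dispatched through DataTypes descriptors and a system library, not a dict):

```python
# Hypothetical sketch of a datatypes-style registry: installing a decoder
# once makes the format available to every program that asks the OS.
datatypes = {}

def install_datatype(magic, name, decoder):
    # dropping a file into DEVS:Datatypes == one registration
    datatypes[magic] = (name, decoder)

def decode(data):
    for magic, (name, decoder) in datatypes.items():
        if data.startswith(magic):
            return name, decoder(data)
    raise ValueError("no datatype installed for this format")

install_datatype(b"\x89PNG", "PNG", lambda d: f"{len(d)} bytes of image data")

# Any program - a web browser, say - can now handle PNGs without
# linking its own decoder:
name, result = decode(b"\x89PNG\r\n\x1a\n....")
```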
Thanks to Datatypes, Amiga OS was the first OS that got support for the PNG file format in all web browsers on the platform.
> What if every buffer in my text-editor registered "I am a source of text,
and I am a target of text".. When I open a file, I can choose a file-system
plugin to provide the text to the buffer, but I could also use the HTTPS provider
if I just wanted the source-code of some website directly into my buffer.
This is "everything is a file", as realized in Plan 9. And if you restrict yourself to unstructured character data, it works beautifully. In the abstract, you can, more or less, direct any stream to any stream.
Unstructured character streams are a very simple interface, perhaps the simplest practical one. Get byte. Put byte. The problems begin when the interface has a type more complex than an unstructured byte stream. Every program needs to be coded around a common interface. What should the common file-like interface for image data, or GPS locations, look like? Ask three programmers and you'll get four or five strong, and strongly differing, opinions.
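With unstructured bytes, the whole "any source to any target" idea fits in a few lines (a sketch; `BytesIO` objects stand in for the file-system and HTTPS providers):

```python
import io

# Anything with .read() is a source; anything with .write() is a target.
# That single convention is what lets a Plan 9-style system wire
# arbitrary providers to arbitrary consumers.
def pump(source, target, chunk_size=4096):
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        target.write(chunk)

# An in-memory buffer standing in for an HTTPS provider...
https_source = io.BytesIO(b"<html>hello</html>")
# ...piped straight into an editor buffer.
editor_buffer = io.BytesIO()
pump(https_source, editor_buffer)
```

The trouble described above starts exactly when `chunk` stops being opaque bytes and the two ends have to agree on structure.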
The complexities of such interfaces and their associated types are probably why such all-encompassing library-OSes have historically tended to be restricted to a single language that is very strongly statically typed (rigorous interface definitions), or dynamic (all the conditions can be handled at runtime), and/or very object-oriented: Lisp, Smalltalk, Oberon, etc. The implementation language's details and type system become very relevant (and incompatible with others) as complex OS interfaces start to dominate the architecture.
This is what computers would be like if it were up to me.
Start with Android's API levels, pack as much as you can into the standard base, none of this Qt vs Gtk vs Wx stuff, then add Linux's DBus for all the stuff you still need to make swappable.
Add Windows' commitment to stability, and the browser's approach of JITting rather than ahead-of-time compiling anything not performance-critical, so we don't have to deal with any more architecture incompatibility.
> One benefit that I can think of is a niche, overriding DLLs and SOs to provide
updated functionality, for example, on my Windows 10 machine, when I want to
play a game that has 3dfx Voodoo Glide support, I usually use that, but I replace
the glide2x.dll with a new version from the nGlide project, which translates the
glide api into a modern API and also gives me control over resolution and stuff
that the original program does not know anything about..
Unikernels are a great idea. But IMO they're the opposite of this vision - even more isolated and single-purpose than traditional userspace programs, rather than being integrated like plugins.
Arch is very heavyweight for a task like this, there are already far more stripped-down options. But even with those, if you're running a general-purpose OS just to run a single application, I have to ask why.
My point is that if you use a full-size linux distribution then that inherently comes with a bunch of random background processes etc. that you don't want.
BIOS updates used to sometimes be shipped as bootable images. But they'd generally use DOS or something equally simple.