Of course, if all else fails, you can just go with the "write a Mach kernel module for Linux" route and make Linux cannibalize itself. The Darling (OS X compatibility layer for Linux) developer has done some work on this for his project, and it's one I might try to pick up for the Hurd if things advance too slowly, or simply as a research curiosity after I'm done with other projects.
I tried it out before in a Lubuntu virtual machine. I had to install Clang and other stuff to compile it with GNUstep, and it is very limited in what it can run.
I linked to the project status page because some of the links on the main page are 404 errors.
Here it is on Github if people want to help out:
Here are the build requirements:
I disagree, given how many microkernel projects by tiny teams were started or finished before Hurd even got to 0.7. Especially the 2-3 person project that was MINIX 3. I think GenodeOS started as a few people's L4 work, too. Maybe what they're doing is fundamentally wrong and they should do it differently while leveraging existing work. Just a thought.
EDIT: Also, version numbers in and of themselves don't mean much. Hurd is at 0.7 yet capable of running many times more Unix software than MINIX at 3.3.x.
"we've only been at it for a bit over a year with three people"
By now, probably a lot more. I'm only talking about original architecture and implementation that possessed the reliability features.
a) The MINIX 3 developers had an enormous head start from reusing MINIX 2 in its entirety. In fact, that's how the project began: as a refactoring of MINIX 2. The same cannot be said for the Hurd, which for the first few years was literally one guy (Thomas Bushnell) writing everything from scratch on top of a stock CMU Mach 3.0 he was only getting acquainted with (he was much more familiar with 4.4BSD), and which the GNU project spent about 3 years procuring due to its proprietary licensing. Further, the MINIX 3 people later imported NetBSD, again building from there.
b) I ran GitStats on MINIX 3 and got a lifetime contributor count of 70, with around 13 major developers. Contrast that with Hurd's lifetime count of 51, with never more than 4-5 major devs, the first few years being only 1.
EDIT: The EU grant no doubt helped, too.
"According to Thomas Bushnell, the initial Hurd architect, their early plan was to adapt the 4.4BSD-Lite kernel and, in hindsight, "It is now perfectly obvious to me that this would have succeeded splendidly and the world would be a very different place today"."
That was an epic hit. :) Far as microkernels, the few in existence were tied up in I.P. So, I wouldn't rag on them if they had to do that part custom. Mach was available, with painful work. I believe Chorus was too, but commercial. The only alternative I would consider was cloning Hydra's capability-secure microkernel. It ran on the PDP-11's that UNIX was built on. CMU might even have given up the code since the project was dead. So, possibilities there. Past that, doing a microkernel at that time would be foolish, and 4.4BSD integrated with GNU components was the obvious choice.
After a certain amount of time passed, there were a number of usable microkernels that could be split off and turned into working systems. There had also been numerous published projects... TrustedMach, Distributed TrustedMach, DTOS... by the DOD to improve Mach's failures with UNIX compatibility, without much success. Several commercial microkernels, including QNX, achieved success in the market while publishing concrete details on how. Given all this, I think there's little justification for Hurd to keep trying a failed strategy on the very microkernel that gave the rest a bad name. They're better off trying to rewrite parts from scratch on an L4-based kernel or QNX clone. Or joining MINIX 3. ;)
"Past that, doing a microkernel at that time would be foolish and 4.4BSD integrated with GNU components was the obvious choice."
I wouldn't have expected such a defeatist attitude from you. Really - trying to push the boundaries of the mainstream was foolish and instead we should have just gotten a GNU-BSD? By the way, they did try adopting an MIT research kernel called TRIX, which had in-kernel RPC and was partially user mode, but that flunked due to the costliness of porting it from the 68000 at the time.
"After a certain amount of time passed, there were a number of usable microkernels that could be split off and turned into working systems."
Ah yes, sure. After you're done with a monolithic Unix that the entire industry will settle on, go do a Perl 6 with "we microkernel now" and expect everyone to follow. Ain't happening.
"Given all this, I think there's little justification for Hurd to keep trying a failed strategy on the very microkernel that gave the rest a bad name. They're better off trying to rewrite parts from scratch on an L4-based kernel or QNX clone. Or joining MINIX 3. ;)"
Retroactively improving Mach is actually a more interesting undertaking than just cloning QNX or doing yet another L4 flavor (which they tried, but found serious issues with L4 and object-capability systems: https://www.gnu.org/software/hurd/faq/which_microkernel.html). Also MINIX 3 doesn't really have the translator concept going for it, nor capability-based security through port rights.
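For anyone unfamiliar with the translator concept mentioned above: a filesystem node can be backed by an arbitrary user-space program that services requests made through it. Here's a toy Python sketch of just the dispatch idea (all class and method names are invented for illustration; none of this is the real Hurd API):

```python
# Toy model of the Hurd translator idea: each path can be "translated"
# by an arbitrary object that services reads made through it.
# Nothing here is the real Hurd API; it only illustrates the dispatch.

class StaticFile:
    def __init__(self, data):
        self.data = data
    def read(self):
        return self.data

class ClockTranslator:
    """A 'file' whose contents are computed fresh on every read."""
    def __init__(self):
        self.ticks = 0
    def read(self):
        self.ticks += 1
        return f"tick {self.ticks}"

class ToyFS:
    def __init__(self):
        self.nodes = {}
    def settrans(self, path, translator):   # loosely like `settrans /path program`
        self.nodes[path] = translator
    def read(self, path):
        return self.nodes[path].read()      # dispatch to whoever backs the node

fs = ToyFS()
fs.settrans("/etc/motd", StaticFile("hello"))
fs.settrans("/dev/clock", ClockTranslator())
print(fs.read("/etc/motd"))   # hello
print(fs.read("/dev/clock"))  # tick 1
print(fs.read("/dev/clock"))  # tick 2
```

The real thing involves Mach ports and libraries like libtrivfs/libnetfs, of course, but the core notion is just this: a `read()` on a path can run arbitrary user-provided code.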
If interested, the Timeline of Operating Systems is a good start. Just pick a summary of anything major (minus the DOSes) and dig into the references if it interests you.
The capability systems were also forward-thinking, with much modern work on OOP, isolation, etc. slowly reinventing their methods. Easy extension plus strong security/reliability were their thing.
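To make the capability idea concrete, here's a toy Python sketch (purely illustrative, not any real system's API): authority travels only through unforgeable object references, and a holder can attenuate a capability before delegating it.

```python
# Toy illustration of object-capability security: authority is carried by
# references you were explicitly handed, and a stronger capability can be
# attenuated into a weaker one before being shared with untrusted code.

class File:
    def __init__(self, data):
        self._data = data
    def read(self):
        return self._data
    def write(self, data):
        self._data = data

def read_only(f):
    """Attenuate: derive a read-only capability from a read-write one."""
    class ReadOnly:
        def read(self):
            return f.read()
    return ReadOnly()

secret = File("s3cret")
cap = read_only(secret)            # grant an untrusted component read access only
assert cap.read() == "s3cret"
assert not hasattr(cap, "write")   # it cannot write, and cannot forge the right
```

The point is that there is no global namespace of privileges to consult: if you weren't handed the reference, the object doesn't exist for you.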
Here's a paper vezzy-fnord shared with a lot of nice examples of features that defined future work, some of them still ahead of modern systems:
Lots of old stuff that still beats modern, mainstream stuff on key metrics. Hopefully, you'll dig into some of it and enjoy the "arcane" stuff yourself. It's arcane, but sometimes awesome too. :)
Note: Mainly see design, layering, and assurance activities.
The most modern use of them is CHERI processor and CheriBSD at Cambridge.
Note: Paper references quite a few past uses of them in detail.
A great, modern example in software is Code Pointer Integrity (see 2014 paper link):
So, there you go for tonight. Have fun. :)
It's really nothing special. Copious amounts of time coupled with an interest in system software research that motivates me to read research papers, dive into project wikis and the like.
If you'd like to discuss things further, just reply affirmatively and I'll send you an email.
I've started to read your piece, but I've honestly been interrupted constantly so haven't been able to give it my full attention, which I intend to do as soon as I can!
My disagreement with you over systemd, incidentally, never had anything to do with my assessment of your abilities. It was pretty obvious we both have technical skills :-)
It was part of it. Their main goal, per their site, was to replace UNIXes with a GNU OS. They failed that one totally as far as I can tell, and didn't do too well on the others for over 10 years. I pick the 2007 papers as the cutoff point since knowledgeable people (eg you) can look at them and assess whether they've eliminated those issues since. If not, the failure has lasted over 20 years. So, it's a failure, and Linux was a success for the main goal.
If the goal is just exploration, then I agree that trying to do something with Mach and UNIX would be an interesting challenge. Still not the best idea, given all the failures up to that point of people trying that exact thing, always due to inherent incompatibilities in their nature. This was true in 1996. But it could be interesting and challenging work from a research perspective that might lead to innovations in handling such things. I've never doubted it would be interesting to read how they handle everything.
"I wouldn't have expected such a defeatist attitude from you."
Maybe it's me going on 4 hrs of sleep for days in a row due to work and whatnot. I decided to give it a fresh look. I spent the past hour reading its original paper, skimmed the papers on its mechanisms, read the 2007 critique in your microkernel link (do read that one), their position paper on fixes, and so on. Brain's still tired, but here's a semi-fresh take on it.
GNU was heavily invested in UNIX but not the proprietary-related BS surrounding it. They wanted to maintain compatibility with their tools, POSIX, and especially a filesystem-centric model, but with more flexibility, security, and reliability. Bushnell and supporters decided on a microkernel approach, built around Mach specifically, kept the privileged apps to the minimum needed, kept as much in the standard library as possible, and tried to rework that combination into something largely compatible with UNIX but different & distributed underneath. All systems up to that point had failed to even emulate UNIX on Mach without compatibility, performance, or security issues. However, they managed to mostly integrate it with better performance, decent compatibility, better (but not good) security, and unknown [to me] reliability. The proxying approach to service construction, used prior in capability & security kernel work IIRC, was also independently invented in Hurd with lots of improvements in usability and consistency. They met their goal of flexibility and of debugging OS modifications outside the kernel.
Now I contrast it to the other work a bit. Work prior to it, esp KeyKOS, managed to decompose the system further with the desired properties: strong security, good reliability, flexibility, and high performance. QNX created an integrated, UNIX-like OS that was fast, real-time, flexible, multi-server, and lacked Mach's weaknesses. They made it distributed & SMP-aware in their post-2000 version. L4 and separation kernels stayed with custom components directly on the microkernel and virtualized away POSIX/Linux/BSD as user-mode processes with lightweight spawning (eg Dresden's TUD:OS). The latter partly due to the perceived impossibility of integration without API-related problems like in the old Mach+UNIX work.
Further, many of these assumed and maintained POLA in much of their design while Hurd didn't. Two examples: too trusting of UNIX assumptions, necessitating full enumeration for defensive programming against many unknowns (Walfield & Brinkmann 2007); its own assumptions like "you can't harm a process by giving it extra permission" (Bushnell 1996). So, doing the comparison, its design has serious and often fundamental problems that others avoid, which stem from the specific requirements of the authors mandating certain components and interfaces.
Let me summarize. It's interesting work given the challenges and what they achieved. It has too many problems that are designed in which the alternatives designed out. Mostly due to trying to shoehorn the monolithic, insecure UNIX/POSIX API into a multi-server, robust microkernel with tight integration. Hard to say if further work is worthwhile given the inability by 2007 to fix fundamental issues caused by mismatches between their goals, Mach, and the UNIX/POSIX API. (Have they been fixed since?) Although some precedent existed, the project did seem to make a lasting contribution via their translators. These were effective and UNIX-like enough that they got uptake that led to FUSE and later Rump kernels. Both are being used in many application areas, ranging from development to desktop/server use to cloud applications.
So, that's the overall take. It was a failure in many respects, but not a total waste. The translator concept was a significant win in FOSS with long-term benefits. The battles over mixing two incompatible systems might benefit other work extending UNIXes or even non-UNIXes using their RPC model. People wanting Hurd to succeed should read the critique and solution papers while knocking out as many of these issues as possible. By 2007, and maybe currently, numerous attributes of the system are inferior to other works. Anyone wanting to meet the goals of the project is better off building on other approaches, esp non-Mach ones, while considering selectively breaking the POSIX model or even entirely virtualizing some aspects. That's my current take unless 2007-2015 brought in amazing, fundamental improvements.
Thoughts on the semi-fresh analysis?
Currently, the goal is indeed exploration. RMS and GNU have given up on it and morale is low. Can you blame them?
In fact, RMS' decision to go with Unix was a pragmatic one. He himself was always more a fan of ITS and Lisp Machines (see his October 1986 speech at KTH), but rightfully sensed no one would end up using such a solution, and thus decided a clone of Unix with non-breaking semantic improvements where possible and a Lisp-y userspace was the best way to bridge adoption with wanting to push forward. Hurd satisfied the first; the second unfortunately stagnated, as Guile never crossed the chasm until very recently, now with GuixSD.
QNX was proprietary and didn't have its architectural unveiling until work on Hurd was started. It again did not have the translator concept, but like MINIX 3 had static servers. KeyKOS' main selling point was the single store and the orthogonal persistence, which nominally could be done on Hurd e.g. by integrating Mach VMM policy into a libdiskfs. It was not a priority then. L4 was too spartan for the experiments on object-capability systems that the Hurd devs embarked: https://www.cs.cmu.edu/~412-s05/projects/HurdL4/hurd-on-l4.p...
It's true that security was never a primary orientation of the Hurd, but its security is still better than monolithic Unixes. And do keep in mind QNX and MINIX 3 are not nominally security-oriented, either. It's the communication boundary structure (and in Hurd's case, port rights and auth server) that intrinsically create superior cases.
Walfield and Brinkmann's 2007 critique was done from the perspective of developers themselves. It wasn't a general criticism of the Hurd, but one of specific problems related to resource accounting that other u-kernel and most monolithic kernel designs suffer from. Relative to the state of the industry, it is still a leap forward. Relative to the state of the art, it is not, but then so do QNX and MINIX 3 make sacrifices.
Implementing the POSIX API has nothing to do with the Hurd's problems. And, indeed, Hurd does break the POSIX model in several aspects (e.g. allowing multiple uids per process). But then we must disqualify QNX and MINIX 3 by this token, too.
The fundamental issues with the Hurd then have little to do with the 2007 critique. More pressing is its incompleteness, whose samsara rump integration will hopefully finally break. If all else fails, using rump drivers on a heavily stripped Linux with Mach module emulation to create a true hybrid kernel, coupled with functional package management and system state management via Guix, with orthogonal persistence and network transparency added on, could be another future direction for the Hurd I might end up exploring. Perhaps serving triply as a multi-user OS, a unikernel and a container OS via subhurds.
The Mach improvements have mostly been surface-level, but again, I would not see it as a huge bottleneck (only an inelegance), and in the hybrid case it becomes irrelevant beyond being a programming API, and it isn't that bad of one.
I said it was a failure and they should give up on it outside research. So, no, I support their decision. ;)
" But then an explicit goal of the Hurd specifically was freedom from tyrannical sysadmin policy, which necessarily included freedom to host one's own private namespace in a multi-user system that doesn't affect the global namespace."
That makes sense, esp given the time it was made. Clouds, etc are still experimenting with models on that due to demand.
"QNX was proprietary and didn't have its architectural unveiling until work on Hurd was started."
And then they had the details they needed for the next 10-20 years. So, my memes about applying proven methods or not learning from the past may apply here.
"KeyKOS' main selling point was the single store and the orthogonal persistence,"
KeyKOS's main benefits, as they touted them, were security/reliability with fine-grained POLA, high performance, and flexibility. One could even extend and share things in such a way that untrusted code couldn't swamp the CPU or memory. KeyKOS did in many use cases what the Mach projects failed to do, thanks to its design decisions and complexity. That's despite experienced, brilliant kernel programmers working full-time on them. KeyKOS also had the two benefits you mention. So, building a KeyKOS-style kernel had huge benefits over a Mach kernel. That's why Shapiro cloned it with EROS. I expect porting such attributes to this Mach project will also be (or would have been) hard.
"It's the communication boundary structure (and in Hurd's case, port rights and auth server) that intrinsically create superior cases."
It's isolation, resource management, and communication. They're necessary for reliability and security as both involve components behaving badly. Also, for reliability, add in self-healing capabilities with at least monitoring and restarting of drivers. The last is a feature but the first three are baked into underlying architecture. The paper noted it lacked one for sure. Further investigation might find problems with the others.
"Walfield and Brinkmann's 2007 critique was done from the perspective of developers themselves. It wasn't a general criticism of the Hurd, but one of specific problems related to resource accounting that other u-kernel and most monolithic kernel designs suffer from."
"Implementing the POSIX API has nothing to do with the Hurd's problems."
We might be reading different papers as there were several. Here's the one I'm referring to:
It's entirely about Hurd except when it's critiquing Mach. It first describes the architecture and mechanisms. Then it goes into the critiques. I'm ignoring those that show up anywhere and just require a tweak. The biggest, for reliability and security, is that its trust model is discretionary at the app/process level: a program inherits the user's privileges with no further limits, and any reductions are at the program's discretion. Capability models, MAC models, and even Windows Vista techniques counter threats in that space that Hurd wouldn't challenge.
The next come from the UNIX/POSIX compatibility & integration style. The filesystem shows how vanilla UNIX apps might break or require significant debugging due to UNIX assumptions that don't apply. The server allocation problem with open is another, but might be fixed if INTEGRITY RTOS's brilliant defence is used: the calling process donates its own CPU/memory for external functions. Would that break POSIX/UNIX behavior? Idk. The author follows up, though, by pointing out that most things in Hurd have lots of ambient authority and would take modifications. These are usually due to UNIX compatibility. Note that ambient authority, along with its resulting downtime and hacks, was one of the reasons UNIX was ditched for capability architectures by the projects that built them.
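A toy sketch of that donation idea, as I understand it (all names invented for illustration; this is not INTEGRITY's actual API): work a server does on a client's behalf is charged against the client's own budget, so a misbehaving client can only exhaust itself, never the server or its other clients.

```python
# Toy sketch of resource donation: the server does work synchronously for
# the caller, but the cost is deducted from the *caller's* budget rather
# than from any server-owned pool. Names are invented for illustration.

class Client:
    def __init__(self, budget):
        self.budget = budget

class Server:
    def open_file(self, client, cost=3):
        if client.budget < cost:
            raise MemoryError("caller's donated budget exhausted")
        client.budget -= cost          # charge the caller, not the server
        return "handle"

server = Server()
greedy, polite = Client(budget=4), Client(budget=100)

server.open_file(greedy)               # ok, greedy pays 3
try:
    server.open_file(greedy)           # greedy is out of budget...
except MemoryError:
    pass
print(server.open_file(polite))        # ...but other clients are unaffected
```

The design question raised above is whether POSIX semantics survive this: POSIX open() has no notion of the caller funding the filesystem server's allocations.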
They then look at Mach. One problem, optimization with predictable patterns, is indeed a general problem for microkernels this flexible and dynamic. This flexibility also makes for unpredictable delays. That's why most microkernels are static and/or optimized on critical parts. Mach also lacked secure management of global resources in such a dynamic system. The authors mention KeyKOS/EROS, which didn't have that problem. There are others, but those were best at this IIRC.
So, the paper and its problems were mostly specific to Hurd and its usage of Mach. In each case, it's an open problem how to fix it in Hurd, while each is a done deal in one or more competing technologies. Mach and UNIX compatibility either prevent the fix or make it quite difficult. So, these should count as Hurd/Mach problems caused by their specific design(s).
"Relative to the state of the art, it is not, but then so do QNX and MINIX 3 make sacrifices."
They do. Not calling them perfection, either. However, they intentionally make choices that make their job easier at kernel, architecture, and API levels. Good engineering involves making right tradeoffs rather than setting oneself up to fail. That's my problem with Hurd if judging against its goals but in exploration mode they just need to make progress. Or make changes.
Note: I do disqualify the two from the secure attribute due to their compatibilities, and QNX's UNIX code got it smashed the first time an online presence was publicized. Reliability, performance, resource management, and UNIX/POSIX integration? Among best in class.
"The fundamental issues with the Hurd then have little to do with the 2007 critique. More pressing is its incompleteness"
That's another issue on top of the fundamentals I referenced earlier. We're probably going to learn about some more side effects of them when they add Rump kernels, which assume a monolithic UNIX, to the mix. It's still a good move on their part, though, and would bring the lineage full circle.
"f all else fails, using rump drivers on a heavily stripped Linux with Mach module emulation to create a true hybrid kernel, coupled with functional package management and system state management via Guix with added on orthogonal persistence and network transparency could be another future direction for the Hurd I might end up exploring."
Now, that sounds like the hacked together, against-all-odds, balls-to-the-wall solution I like to see! Although above issues will affect multi-user part: specific examples were already cited. Might be fine if they're benign users. So, build the monstrosity and make it work! :)
"I would not see it as a huge bottleneck (only an inelegance) and in the hybrid case it becomes irrelevant beyond as a programming API, which it isn't that bad of one."
In the past, it was a bottleneck but people thought the API was fine. So, it will be interesting to see what happens on containerized, multi-tenant, IPC-heavy workloads. On it vs alternative schemes.
A microkernel design seemed sane from experience in the UK. I don't know if it was technically a microkernel, but the GEC OS4000 Nucleus <https://en.wikipedia.org/wiki/OS4000> supported a system that made VMS VAXen look slow in the 80s. (Latterly, OS6000, with a software Nucleus, still compared well with System V on the same hardware.)
L4 was already tried for Hurd, surely.
Well, one company did build on that work, as far as I can tell. They built on it so well they dropped bombs on the formal verification community. You might be surprised which company that was.
Safe to the last instruction: Automated verification of a type-safe operating system (2010)
I don't know to what extent security was the reason for the hardware nucleus rather than a by-product of the fast system. (I recall French colleagues thought the context switching was slow, thinking it must be in milliseconds, not microseconds.) However, the software versions for the later OS6000, and whatever the 88k implementation was called, seemed OK.
I don't understand the reference to the paper. I don't think there was any formal verification in OS4000 (or the hardware, given the 4190 arithmetic bug), or GC in Nucleus, and the typed assembler doesn't look to have much in common with Babbage. Which company?
Good to see an OS4000 user who appreciates its significance after all this time, anyhow.
Interesting. I never thought about a connection to RC 4000: just figured it was another example of how things were named back then. Could've been some influence.
"I don't know to what extent security was the reason for the hardware nucleus rather than a by-product of the fast system. (I recall French colleagues thought the context switching was slow, thiking it must be in milliseconds, not microseconds.) "
I doubt security was the reason as well. Few were thinking of it then. It was just a benefit of the design.
"I don't understand the reference to the paper. "
The original system designed a core component to make everything else easier to implement and consistent. That was called Nucleus. It had specific functions key to any OS. The rest was built around it.
The Microsoft system designed a core component to make everything consistent and verification easier. That was called Nucleus. It had specific functions key to the OS and many potential others. The rest was built around it.
I just thought whoever designed Verve might have heard of GEC 4000 and copied it to some degree. Or independently invented the concept. GEC 4000 still has advantages, though.
"Good to see an OS4000 user who appreciates its significance after all this time, anyhow."
I've never used it or any machine from 1960's-1970's. Someone asked vezzy-fnord recently why he has so much arcane knowledge about computing and programming. I answered for myself on that as I similarly research old work for the wisdom to be gained:
I have literally hundreds of papers on old stuff with many solving key problems and some doing better than today's stuff by at least one metric. I expressed the problem and potential solution here:
So, referencing Nucleus is just me doing my part. Neat that you got to use the thing for real. How robust was the system compared to others of the time? Did you find the kernel design or process separation to have practical benefit in its day-to-day operation or administration?
Meanwhile, Tanenbaum's grants from the EU total approximately $6 million for the express purpose of producing a reliable microkernel and building an operating system on top of it, all under the auspices of a research university. And as others have noted, most of MINIX's userland is the result of splicing in large parts of the existing NetBSD userland, so the skew in attention that the kernel received is pretty significant, cf. GNU's Hurd.
On top of their design decisions that made the project ridiculously hard in the first place.
Note the quote from the head of the project, who confessed to the specific mistake. That's not tautological at all: it's what 5-10 seconds of Google would've earned you. It also matched the first idea that came to my head about where I would've started at the time. Linus and Tanenbaum both started with the Minix codebase, one indirectly and one directly, to achieve success.
So, Hurd was just Doing It Wrong. Further, that starting with a poor architecture or not building off a good codebase often leads to failure are proven heuristics.
I think the simple architecture is their main benefit, though. They only put in complexity where it gives the most benefit. Then they reused NetBSD's work. A common cheat.
> my lay person's understanding of microkernels
On the other hand, it is stable enough to serve the Hurd wiki. In addition, it has ~81% of Debian building, it can run Xfce and Firefox... not bad. It's much more than MINIX 3, which is solid but lacking in packages.
I got the sense that it was a friendly group. Expressing interest, even as someone who just wanted to play around and see what it was all about, was well-received.
When he talked about HURD's advantages for mounting remote ISO images, it felt a little too practical not to be something he'd experienced on his own daily driver.
Nevertheless we read stories here again and again about how to achieve faster server speed - cloudflare stories come to mind - with userspace tcp stacks etc.
Back in 1990, the Amoeba system showed a 2x improvement in throughput and a 5x improvement in latency for its RPC over the SunRPC of the time. Relative measurements for the V-System and Sprite were similar. QNX, a microkernel-based system with high commercial success and a long history (used by over 40 automotive manufacturers), has very fast IPC thanks to integrating message passing with the CPU scheduler. After years of research, Jochen Liedtke introduced L4 in the mid-1990s, using only seven generalized calls, with a 20x improvement in speed over prior art such as Mach. Mach, by the way, is the glaring exception to microkernel performance because of its complicated message packing and port-rights checking. Yet the Hurd developers are doing well in optimizing it, and to this day Mach is used to malign microkernels by people with no background on the subject. OKL4, in turn, had shipped in ~1.5 billion devices by 2012, powering the baseband processor behind nearly every mobile phone. MINIX 3, finally, shows only a ~5-10% performance drop relative to monolithic Unixes, and that as of 2006.
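The cost being argued over in all these numbers is the IPC round trip. As a crude user-space analogy (invented for illustration; absolute numbers are meaningless, the gap is the point), here's a Python sketch comparing a direct in-process call against a round trip through pipes to a child process standing in for a user-space server:

```python
# Crude illustration of why IPC cost dominated the microkernel debate:
# compare a direct in-process call with a full round trip through a pair
# of pipes to a child process (a stand-in for a user-space server).
# POSIX only (uses os.fork).

import os, time

def direct(x):
    return x + 1

def bench_direct(n):
    t = time.perf_counter()
    for i in range(n):
        direct(i)
    return time.perf_counter() - t

def bench_pipe(n):
    c2s_r, c2s_w = os.pipe()   # client -> server
    s2c_r, s2c_w = os.pipe()   # server -> client
    if os.fork() == 0:         # "server" process: echo every message back
        os.close(c2s_w); os.close(s2c_r)
        while True:
            msg = os.read(c2s_r, 8)
            if not msg:
                os._exit(0)    # client closed its end; shut down
            os.write(s2c_w, msg)
    os.close(c2s_r); os.close(s2c_w)
    t = time.perf_counter()
    for _ in range(n):
        os.write(c2s_w, b"ping")   # one synchronous request...
        os.read(s2c_r, 8)          # ...and wait for the reply
    dt = time.perf_counter() - t
    os.close(c2s_w); os.wait()
    return dt

n = 2000
print(f"direct call: {bench_direct(n)/n*1e6:.2f} us/op")
print(f"pipe RPC:    {bench_pipe(n)/n*1e6:.2f} us/op")
```

The per-operation gap between the two lines is the overhead a microkernel has to pay on every server interaction, which is exactly what the L4 and QNX work drove down.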
They never learn.
It was . . . okay. The number of cycles it actually took to do a lightweight message pass was dismaying, though. The newt would have benefited from a less partitioned design for the underlying page and storage management.
I used another microkernel OS on a set-top box. Now that was a misery, but one I attribute more to the uncaring attitude of the company that wrote the STB's firmware than any inherent problem with their OS (a decade later, critical and customer-affecting race conditions are still present in their code). To be honest, their OS didn't help.
I don't think microkernels are bad. Making bad choices about performance that affect performance, battery life and other things that customers care about is bad, and you might have to break some abstractions in your microkernel to address those.
Hmmm STMicro-based? :)
Originally, graphics ran in user mode, but Windows 2000 moved it back into kernel mode for performance reasons. Vista got a new device driver model and a new graphics driver model (WDDM), which has a way for graphics to run partly in user mode. Vista and later also support user-mode device drivers.
The NT kernel and OS were designed to run DOS, POSIX, and OS/2 1.x CLI apps (but not GUI ones), plus 16-bit and 32-bit Windows apps. Later on, OS/2 and POSIX support got dropped.
Microsoft's OS/2 NT 3.0 was a rewrite of OS/2 for 32-bit systems; before that, OS/2 1.x was on 16-bit systems. IBM and Microsoft fought over the 2.0 standards, and Microsoft stopped supporting OS/2 and focused on Windows instead, renaming their OS/2 to NT, for New Technology.
The interesting thing about Windows NT was that Microsoft had plans to port it to different processors, but eventually dropped those plans and stuck with x86 processors instead.
It was designed as a modified microkernel.
What you're referring to were what the NT team called "personalities", or more formally "environment subsystems".
A subsystem was an API on top of the native NT API. Win32 was one such subsystem; users were exposed to the Win32 API, and weren't supposed to talk directly to the intentionally undocumented NT API (although some developers did reverse-engineer it eventually). POSIX and OS/2 compatibility was implemented as subsystems, but these were rarely used by anyone, and eventually removed in XP. DOS was not a subsystem, but rather its own thing.
Note that this has nothing to do with microkernels. In a microkernel architecture, things like file systems and device drivers run as normal processes separate from the kernel. In NT, everything was in the kernel. These subsystems didn't act as part of a microkernel design.
Oh, and NT was not a rewrite of OS/2. It was written from scratch. The project was started back when Microsoft and IBM had a good relationship, and NT was originally planned to ship as OS/2 3.0.
They didn't plan it, they did it. I was looking after Windows NT on Alpha processors at one job. I never personally saw NT on PowerPC or MIPS, though they did exist.
Mach had a surprising resurgence in interest over NextBSD implementing a ton of compatibility layers and mangling Apple sources in a weird, misguided attempt to make FreeBSD the new OS X. We'll see how it turns out.
x86 has highly specific support for protection rings and switching between them, as well as things like page faulting and interrupt management, leading to the classic kernel/user split with a kernel as a privileged actor underneath a user mode.
But having just one exclusive, reserved "kernel mode" is starting to look old, which is why there's now so much talk about virtualization and exokernels and so on. The microkernel design certainly seems very elegant, but it looks to me like Intel's architecture was always a stumbling block. You have to wonder about what hardware support you could invent that would make microkernels a better fit.
Page faulting was in hardware on older systems too, because putting those things in hardware is a lot faster than doing them in software, plus controlling memory access really should be a privileged activity. Interrupt handling is in a similar situation, and even there you still need some process handling the interrupt vector table. It's possible to make most of an interrupt handler a user-level process through page table and interrupt return address hacking, but for the moment that's unfortunately rare.
As for your desire for a replacement for one exclusive, reserved kernel mode, there have been a few OSes that have tried to break that pattern. OS/2 used Ring 2 of the x86 for drivers, but unfortunately that wasn't carried over when NT was forked off. Being able to put semi-trusted drivers in a separate area, and perhaps even a user session manager too, could allow for some interesting security experiments that don't rely on (para)virtualization.
Hardware-wise, it would be useful to have hardware contexts, like SPARCs have, so that the group of registers a process uses can be swapped in and out a lot more easily. Context switching is expensive, and building processors that recognize that the modern user tends to have more than one task running would be a pretty good performance win.
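You can get a feel for that cost from user space with the classic pipe ping-pong trick: two processes bounce a byte back and forth, so every round trip forces at least two context switches. A rough sketch in Python (absolute numbers vary wildly by machine and kernel; this is only illustrative):

```python
# Rough context-switch microbenchmark: two processes bounce one byte
# through a pair of pipes, so each round trip costs (at least) two
# context switches plus the pipe read/write overhead.
import os
import time

def pingpong(iters=5000):
    p2c_r, p2c_w = os.pipe()   # parent -> child
    c2p_r, c2p_w = os.pipe()   # child -> parent
    pid = os.fork()
    if pid == 0:               # child: echo every byte straight back
        for _ in range(iters):
            os.write(c2p_w, os.read(p2c_r, 1))
        os._exit(0)
    t0 = time.perf_counter_ns()
    for _ in range(iters):     # parent: send a byte, block for the echo
        os.write(p2c_w, b"x")
        os.read(c2p_r, 1)
    t1 = time.perf_counter_ns()
    os.waitpid(pid, 0)
    return (t1 - t0) / iters   # ns per round trip

print(f"~{pingpong():.0f} ns per round trip")
```

The measured figure bundles syscall and scheduling overhead together with the raw register-swap cost, which is exactly the overhead hardware contexts would aim to shrink.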
 "If you created the name with a DCL command, the access mode defaults to supervisor mode. If you created the name with a program, the access mode typically defaults to user mode." http://h71000.www7.hp.com/doc/731final/4477/4477pro_007.html
A modern Intel/AMD processor lets you virtualize the CPU, MMU, and I/O. Network cards and HBAs can be partitioned into virtual NICs.
Two primary functions of a classic OS are process isolation (i.e. virtual memory) and hardware sharing. But now that these two primary functions have been pushed down into hardware, it has changed the way we think about operating systems considerably.
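The process-isolation half is easy to see from user space: after a fork, parent and child have separate (copy-on-write) address spaces, so a write in one is invisible to the other. A tiny Python illustration:

```python
# After fork(), parent and child each have their own address space:
# the child's write to `counter` never reaches the parent's copy.
import os

counter = 1
pid = os.fork()
if pid == 0:
    counter = 999          # modifies only the child's mapping
    os._exit(0)
os.waitpid(pid, 0)
print(counter)             # still 1 in the parent
```

The MMU enforces this separation on every memory access; the OS just sets up the page tables, which is exactly the part that hardware virtualization now handles a level lower.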
There isn't much left for a classic OS to do other than provide a common set of APIs for programs to talk through. Yet these days we can statically link even the largest libraries into our exokernels. And hypervisors are capable of using techniques like same-page merging to reduce the memory burden of running many large (exo)kernels at once.
I don't expect the classic one-OS-running-many-processes model to go away overnight, or possibly ever. But the exokernel model is very compelling for large-scale, high-performance software services, and it will continue to catch on.
It's an architecture where there is no real border between a monolithic kernel and a microkernel. It's just a difference in access control policies, to the point that those names lose their meaning.
As always, it's a very interesting architecture. I hope they produce it someday.
Here's the architecture for GEMSOS. See the design/assurance sections.
Look up SCOMP Final Evaluation Report if you want to see how STOP OS used four rings and had an IOMMU despite that being "invented" recently. ;) The XTS-400 is the Intel version, uses same architecture minus custom hardware, and is still doing its job at hundreds of installations.
Definitely a major performance hit on both GEMSOS and STOP, but they were '80s-era stuff. Modern separation kernels do most stuff with just user/kernel mode separation and tiny kernels (4-12 kloc). LynxSecure claims this helps them keep the CPU 97% idle with 100,000+ context switches a second. I'd expect old architectures to run even faster with modern techniques.
Performance is not a single metric. There is throughput, latency and then you can screw it all up and make it much harder by demanding guarantees on either of those.
Performance without guarantees is worth very little in quite a few situations.
With modern (buggy) hardware and DMA access, when your driver and/or hardware fails all bets are off. Some hardware may be possible to reboot (much as you'd reinitialize a kernel module in Linux), but sometimes your best course of action is a complete reboot.
As for security, you also need to take a long, hard look at the operating systems your operating system relies on, such as the ones powering your disks, NIC, PCI controller, etc. There are some potentially tricky security interactions with them.
When trying to secure a system, we have reached the point where you sometimes have to ask "is this CPU opcode safe?" Sometimes it just feels like modern hardware complexity is reaching some kind of critical-mass threshold for "stupid shit".
If you want verifiable hardware, look up the VAMP processor as it has everything from design descriptions to formal proofs of correctness. Not sure about its availability. SPARC and RISC-V are very open with open-source implementations available with Linux and compiler support. So, there's a solution if people ever want to put the work in.
When I say 'just like a Unix', I mean: you log in and start up some terminal windows and it's sh and you can compile and run X11 software with configure scripts and gcc and you can print stuff with lpr and it all just works. It even runs Java --- my copy was completely self-hosting via Eclipse!
Years ago there was a single-floppy QNX demo disk: you booted from this and you got a basic desktop with dialup modem support and a web browser. SINGLE FLOPPY.
Hey, look! I was all set to write a paragraph about how sad it was that you couldn't get the bootable QNX CD any more, but look what I found!
It doesn't even need registration and a login any more! Holy crap, I have to see if this still works...
Edit: looks like it won't install without a license key. I'll see if I can get one...
Edit: apparently I have an account with QNX dating back from 2012, with three hobbyist license keys, one of which makes the installation CD happy. I don't know whether these are still available, though.
Edit: so it installed into a VM in about two minutes flat, and I now have a very old copy of Firefox running. It can't see the network, but that's nothing to do with QNX and everything to do with my inability to set up kvm. I need to find a real machine to run this on.
Edit: it will only install from CD, not from USB (unetbootin doesn't help). And I've lost the power cable for my CD burner. So until I find it, I won't be able to proceed here. Sorry. Still good to know this still exists, though...
Dan Hildebrand's original announcement on comp.os.linux.development.system back in 1997: http://marc.info/?l=freebsd-chat&m=103030933111004
Archived homepage of the QNX Demo Disk: http://web.archive.org/web/20011019174050/www.qnx.com/demodi...
including "How we did it":
and "What people are saying" (to give folks today an idea of the excitement around a 1.44MB GUI OS with networking and Japanese support back in the 90s):
Download links and screenshots:
And fast. I knew the owners of a company that sold X Windows commercially, and their fastest version ran under QNX, using the native QNX message passing facilities. The fact that the QNX kernel on a Pentium was only 8K in size was also mind blowing.
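QNX's native IPC is a synchronous Send/Receive/Reply rendezvous: the client blocks in its send until the server has processed the request and replied, so there's no buffering or polling in between. A loose sketch of that shape over ordinary Unix pipes (this is not the real QNX MsgSend/MsgReceive/MsgReply API, just an analogy for the pattern):

```python
# Sketch of synchronous Send/Receive/Reply IPC (QNX-style rendezvous)
# built on ordinary pipes. The client blocks in send() until the server
# has handled the request and replied -- no queueing, no polling.
import os

def demo():
    req_r, req_w = os.pipe()              # client -> server
    rep_r, rep_w = os.pipe()              # server -> client
    pid = os.fork()
    if pid == 0:                          # server: Receive, process, Reply
        while True:
            msg = os.read(req_r, 64)      # Receive: block for a request
            if msg == b"quit":
                os._exit(0)
            os.write(rep_w, msg.upper())  # Reply: unblocks the client
    def send(msg):                        # client Send: blocks until Reply
        os.write(req_w, msg)
        return os.read(rep_r, 64)
    reply = send(b"hello")
    os.write(req_w, b"quit")              # shut the server down
    os.waitpid(pid, 0)
    return reply

print(demo())  # b'HELLO'
```

Because the sender is already blocked when the reply comes back, the kernel can hand the CPU straight from server to client, which is part of why QNX message passing was fast enough to carry an X server.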
An interesting data point, though: QNX had a very clean and modularized 'microkernel-like' network stack called 'io-net'. But due to throughput issues in some situations, they switched to a new architecture a few years ago called 'io-pkt'. This is essentially the kernel networking code from BSD transplanted to a process in QNX; one advantage of this is that there's a large stock of drivers available to port and a lot of people are familiar with the BSD networking model, but some of the lesser-used corners of it weren't fully debugged when I was doing network protocol hacking, and in general it made me sad.
Anyway, you can definitely run full desktop environments on it, and you can especially run a full tablet environment if you buy a recent Blackberry tablet, as RIM now owns QNX and uses it as their latest Blackberry OS. This was, unfortunately, a step backwards in their openness and embrace of Open Source.
Even in that short time, I got a feel that the OS was quite fast, compared to early Linuxes that I used around the same time, and Windows. Remember reading in the mag that it had that Photon GUI, IIRC, that others have mentioned in comments here. I tried out at least some apps, bash, etc., and they were all snappy.
So, they worked better and keep working better. People just use monolithic kernels for some reason. Just like they took forever to get off of C for reliable, business applications. "Worse is better."
I would not expect an atheist to consider themselves "God forsaken", and I would not be surprised if one was kind of offended by being called that.
(I myself am not an atheist, ftr)
Given that, I am not sure why you would be surprised that people reacted negatively?
Microkernels aren't going away any time soon. :-)
At what point do you conclude that you're:
* running everything in hypervisor X, thus hardware support doesn't matter
* running only servers, thus desktops don't matter
* and thusly, may stand more to gain from an environment that never catered to the above two?
For consumer devices, the Blackberry Playbook was one of the better examples, given it was built on the QNX microkernel for reliability, and nobody was complaining about its performance (e.g. running two games at once). Lots of phones have microkernels in them to isolate the baseband from problems in the main OS. Did you know you were using a virtualized OS message-passing to other components? I didn't either until I saw the vendor's press release. They might be efficient after all. ;)
Here's one used a lot in safety-critical and security-critical apps with nice security architecture:
The beautiful and efficient MorphOS uses a microkernel:
One that was made for capability-security model with FOSS code:
Note: See their papers, and KeyKOS especially, as it was the first successful one on IBM mainframes of old.
GenodeOS and MINIX 3 are the best efforts to look at in FOSS today as they're actively maintained. Not at Linux feature set or stability yet given almost no staff haha. Already on bare metal and hosting apps/desktops, though. Just don't think stuff like Hurd is representative of microkernels in general. It's just a failed GNU project. ;)
The original design for AmigaOS was known as CAOS. The CAOS design included resource tracking, which I believe is a useful feature when it comes to implementing memory protection.
If I remember correctly, what ended up happening was that the microkernel for CAOS (Exec) was developed in-house, while contractors were being paid to develop the rest of the OS; they disagreed with the design decisions of CAOS and wanted to make it more Unix-like. Once the Amiga team realised the contractors had wasted time developing something different from what was intended, they sought out an alternative to replace the user-space side of the OS. As a result, they contracted a different team to rework an OS called TripOS to use the Exec microkernel (originally intended for CAOS), which then became the AmigaOS that Amiga users know.
You can read a bit more here:
MorphOS - quite fast and usable :)
So, out with the old and in with the new.
Multivisor is kind of a combination of the INTEGRITY RTOS and virtualization stuff.
The leader in this kind of stuff was OKL4 like in your reference, though. General Dynamics supplies their stuff now due to acquisition.
(XNU was inherited from NeXT, which was based on Mach 2.5, which was actually not a microkernel at all.)
More importantly, projects like this frequently spread their ideas elsewhere, even if the project itself isn't very useful. Another possibility is that someone gets a bunch of skills from developing in the much more open territory of Hurd and uses that knowledge to contribute in a more limited fashion in the Linux kernel.
Of course, I'm sure that Richard Stallman & Friends would be unhappy with my characterization of Hurd as an academic toy project, but them's the breaks. It's on them to show that it is competitive on any level with Linux, and right now that's not the case.
Being competitive with Linux isn't even a project goal.