GNU Hurd 0.7 has been released (gnu.org)
208 points by vezzy-fnord on Oct 31, 2015 | 111 comments



Good to see steady progress :) I hope the L4 port eventually picks up a new round of steam [1], but they squeeze quite a bit out of the Mach design (from my lay person's understanding of microkernels), and having GNU Guix since this year is also nice.

[1] https://www.gnu.org/software/hurd/history/port_to_another_mi...


Strongly doubt it. The developers have adamantly focused on improving the standard Hurd/Mach over pursuing other microkernels or even compatibility layers on monolithic Unixes. Given how ridiculously little manpower they have, it's the only practical course. Not to mention the fact that rump kernel integration is very slowly but steadily picking up: https://lists.gnu.org/archive/html/bug-hurd/2015-10/index.ht...

Of course, if all else fails, you can just go with the "write a Mach kernel module for Linux" route and make Linux cannibalize itself. The Darling (OS X compatibility layer for Linux) developer has done some work on this for his project, and it's one I might try to pick up for the Hurd if things advance too slowly, or simply as a research curiosity after I'm done with other projects.


IIRC Darling uses GNUstep and Clang to try to run OS X binaries.

https://www.darlinghq.org/

http://www.darlinghq.org/project-status

I tried it out before in a Lubuntu virtual machine; I had to install Clang and other stuff to compile it with GNUstep, and it is very limited in what it can run.

I linked to the project status page because some of the links on the main page are 404 errors.

Here it is on Github if people want to help out: https://github.com/darlinghq/darling

Here are the build requirements: http://www.darlinghq.org/build-instructions


"Given how ridiculously little manpower they have, it's only the most practical."

I disagree, given how many microkernel projects by tiny teams have been started or finished before they got to even 0.7. Especially the 2-3 person project that was MINIX 3. I think GenodeOS started as a few people's L4 work, too. Maybe what they're doing is fundamentally wrong and they should do it differently while leveraging existing work. Just a thought.


MINIX 3, 2-3 people? You're kidding, it's far more than that. Especially given how many students AST supervises at the Vrije Universiteit.

EDIT: Also, version numbers in and of themselves don't mean much. Hurd is at 0.7 yet capable of running many times more Unix software than MINIX at 3.3.x.


I was going by Tanenbaum's claim about the original work.

"we've only been at it for a bit over a year with three people"

By now, probably a lot more. I'm only talking about original architecture and implementation that possessed the reliability features.


Two things:

a) The MINIX 3 developers had an enormous head start from reusing MINIX 2 in its entirety. In fact, that's how the project began: it was a refactoring of MINIX 2. The same cannot be said for the Hurd, which for the first few years was literally one guy (Thomas Bushnell) writing everything from scratch based on the stock CMU Mach 3.0 he was only getting acquainted with (he was much more familiar with 4.4BSD), and which the GNU project had spent about 3 years procuring due to its proprietary licensing. Further, the MINIX 3 people later imported NetBSD, again building on from there.

b) I ran GitStats on MINIX 3 and got a lifetime contributor count of 70, with around 13 major developers. Contrast that to Hurd's lifetime count of 51, with never more than 4-5 major devs, the first few years being only 1.

EDIT: The EU grant no doubt helped, too.


So, they started with a proven codebase, refactored it with some developers + funding, got more developers/funding, and imported other proven code. The result was a working system in a short period of time. So, I might change how I present the project. Nonetheless, it sounds like what any well-managed project does and what Hurd could've done. Matter of fact, I was going to suggest they could've reworked BSD 4.4 when I got a holy shit moment doing a quick fact-check on Wikipedia:

"According to Thomas Bushnell, the initial Hurd architect, their early plan was to adapt the 4.4BSD-Lite kernel and, in hindsight, "It is now perfectly obvious to me that this would have succeeded splendidly and the world would be a very different place today"."

That was an epic hit. :) As far as microkernels, the few in existence were tied up in IP. So, I wouldn't rag on them if they had to do that part custom. Mach was available, with painful work. I believe Chorus was too, but commercial. The only alternative I would consider was cloning Hydra's capability-secure microkernel. It ran on the PDP-11's that UNIX was built on. CMU might have even given up the code since the project was dead. So, possibilities there. Past that, doing a microkernel at that time would be foolish and 4.4BSD integrated with GNU components was the obvious choice.

After a certain amount of time passed, there were a number of usable microkernels that could be split off and turned into working systems. There had also been numerous published projects (TrustedMach, Distributed TrustedMach, DTOS...) by the DOD to address Mach's failures with UNIX compatibility, without much success. Several commercial microkernels, including QNX, achieved success in the market while publishing concrete details on how. Given all this, I think there's little justification for Hurd to keep trying a failed strategy on the very microkernel that gave the rest a bad name. They're better off trying to rewrite parts from scratch on an L4-based kernel or QNX clone. Or joining MINIX 3. ;)


Except that the reworked 4.4BSD version likely wouldn't have been a microkernel at all. And remember that much of the reason for the Hurd's u-kernel architecture was in line with FSF goals of user freedom: to get rid of the whole sysadmin/user ring and have the user be master of their own domain, even having private swapped versions of OS services if needed, on top of the flexibility of the translator concept, which lets OS services speak any object protocol, including files.

"Past that, doing a microkernel at that time would be foolish and 4.4BSD integrated with GNU components was the obvious choice."

I wouldn't have expected such a defeatist attitude from you. Really - trying to push the boundaries of the mainstream was foolish and instead we should have just gotten a GNU-BSD? By the way, they did try adopting an MIT research kernel called TRIX, which had in-kernel RPC and was partially user mode, but that flunked due to the costliness of porting it from the 68000 at the time.

"After a certain amount of time passed, there were a number of usable microkernels that could be split off and turned into working systems."

Ah yes, sure. After you're done with a monolithic Unix that the entire industry will settle on, go do a Perl 6 with "we microkernel now" and expect everyone to follow. Ain't happening.

"Given all this, I think there's little justification for Hurd to keep trying a failed strategy on the very microkernel that gave the rest a bad name. They're better off trying to rewrite parts from scratch on an L4-based kernel or QNX clone. Or joining MINIX 3. ;)"

Retroactively improving Mach is actually a more interesting undertaking than just cloning QNX or doing yet another L4 flavor (which they tried, but found serious issues with L4 and object-capability systems: https://www.gnu.org/software/hurd/faq/which_microkernel.html). Also MINIX 3 doesn't really have the translator concept going for it, nor capability-based security through port rights.
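
For the unfamiliar: a Mach port right is a kernel-tracked capability, so a task can only message a port it explicitly holds a right to, and rights only move via IPC or inheritance. A minimal, untested sketch against the classic Mach API (mach_port_allocate and mach_port_insert_right are the real calls; everything else here is just illustration):

    #include <mach/mach.h>
    #include <stdio.h>

    int main(void)
    {
        mach_port_t port;

        /* Allocate a receive right: the capability a server holds. */
        if (mach_port_allocate(mach_task_self(),
                               MACH_PORT_RIGHT_RECEIVE,
                               &port) != KERN_SUCCESS)
            return 1;

        /* Mint a send right from it. Only tasks holding a send right,
           handed over via IPC, can message this port. That transfer
           discipline is what makes port rights capabilities. */
        if (mach_port_insert_right(mach_task_self(), port, port,
                                   MACH_MSG_TYPE_MAKE_SEND) != KERN_SUCCESS)
            return 1;

        printf("port %u: receive and send rights held\n", (unsigned)port);
        return 0;
    }

The kernel itself only knows rights; the Hurd's auth server layers Unix identities on top of that.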


I have honestly started to look at your comment feed. Seriously, how the hell do you hold this much arcane (but amazing!) knowledge in your head?


Both he and I realized that there are lots of answers to modern problems in those old papers, as history repeats a lot in IT. The field has a problem passing wisdom down to the next generations, or even generalizing its solutions. The constraints and OS/hardware competitiveness of the time also produced lots of inherently interesting solutions to problems, solutions you're unlikely to see reappear outside of embedded or special-purpose systems. Fun stuff to read.

If interested, the Timeline of Operating Systems is a good start. Just pick a summary of anything major (minus the DOSes) and dig into the references if it interests you.

https://en.wikipedia.org/wiki/Timeline_of_operating_systems

The capability systems were also forward-thinking, with much modern work on OOP, isolation, etc. slowly reinventing their methods. Easy extension plus strong security/reliability were their thing.

https://homes.cs.washington.edu/~levy/capabook/index.html

Here's a paper vezzy-fnord shared, with a lot of nice examples of features that defined future work, some still ahead of modern systems:

http://brinch-hansen.net/papers/2001b.pdf

Lots of old stuff that still beats modern, mainstream stuff on key metrics. Hopefully, you'll dig into some of it and enjoy the "arcane" stuff yourself. It's arcane, but sometimes awesome too. :)


Oh, when I say "arcane" it's not a criticism :-) and a little tongue in cheek; I love stuff like this. Hell, I'm reviewing x86 segmented real mode at the moment, just because I can. I'm the last one to criticise someone for doing deep dives into technical material!


Oh, I knew you liked the arcane stuff: it's why I replied. :) Real mode I don't have as much on, but segments are powerful in the right hands. Even on Atom, they do in 2 cycles what paging isolation takes 8 to do. They're called "descriptors" in the capability book I referenced, so definitely check that out, or the dedicated chapter on descriptor architectures (a layout sketch follows the note below). The high-assurance GEMSOS kernel used segments when ported to the Intel 286:

http://aesec.com/eval/NCSC-FER-94-008.pdf

Note: Mainly see design, layering, and assurance activities.
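
Since I keep saying "descriptors": here's the layout of a 32-bit x86 segment descriptor as a C struct, straight from the Intel manuals rather than anything GEMSOS-specific (the 286's descriptors were also 8 bytes, with the top word reserved):

    #include <stdint.h>

    /* One 8-byte GDT/LDT entry: base, limit, and access control
       that the hardware checks on every segment load. */
    struct seg_descriptor {
        uint16_t limit_low;   /* limit bits 0-15 */
        uint16_t base_low;    /* base bits 0-15 */
        uint8_t  base_mid;    /* base bits 16-23 */
        uint8_t  access;      /* present bit, DPL (ring 0-3), type */
        uint8_t  flags_limit; /* granularity/size flags, limit bits 16-19 */
        uint8_t  base_high;   /* base bits 24-31 */
    } __attribute__((packed));

The DPL bits in the access byte are what tie segments into the ring architecture, and the hardware bounds check is the cheap isolation I mentioned above.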

The most modern use of them is CHERI processor and CheriBSD at Cambridge.

https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/

https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-864.pdf

Note: Paper references quite a few past uses of them in detail.

A great, modern example in software is Code Pointer Integrity (see 2014 paper link):

http://dslab.epfl.ch/proj/cpi/

So, there you go for tonight. Have fun. :)


Heh, I'm flattered. I recall we left a rather bitter taste in each other's mouths regarding the systemd question (speaking of which, you never got back to me on my paper).

It's really nothing special. Copious amounts of time coupled with an interest in system software research that motivates me to read research papers, dive into project wikis and the like.

If you'd like to discuss things further, just reply affirmatively and I'll send you an email.


I was quite depressed at the time, I can only apologise for my tone.

I've started to read your piece, but I've honestly been interrupted constantly so haven't been able to give it my full attention, which I intend to do as soon as I can!

My disagreement with you over systemd, incidentally, never had anything to do with my assessment of your abilities. It was pretty obvious we both have technical skills :-)


"Except that the reworked 4.4BSD version likely wouldn't have at all been a microkernel. And remember that much of the reason for the Hurd's u-kernel architecture was in line with FSF goals of user freedom."

It was part of it. Their main goal, per their site, was to replace UNIX(es) with a GNU OS. They failed that one totally as far as I can tell, and didn't do too well on the others for over 10 years. I pick the 2007 papers as the cutoff point since knowledgeable people (e.g. you) can look at them and assess whether they've eliminated those issues since. If not, the failure has lasted over 20 years. So, it's a failure, and Linux was a success for the main goal.

If the goal is just exploration, then I agree that trying to do something with Mach and UNIX would be an interesting challenge. Still not the best idea, given all the failures up to that point by people trying that exact thing, always due to inherent incompatibilities in their natures. This was true in 1996. But it could be interesting and challenging work from a research perspective that might lead to innovations in handling such things. I've never doubted it would be interesting to read how they handle everything.

"I wouldn't have expected such a defeatist attitude from you."

Maybe it's me going on 4 hrs of sleep for days in a row due to work and whatnot. I decided to give it a fresh look. I spent the past hour reading its original paper, skimmed the papers on its mechanisms, read the 2007 critique in your microkernel link (do read that one), their position paper on fixes, and so on. Brain still tired, but here's a semi-fresh take on it.

GNU was heavily invested in UNIX but not the proprietary-related BS surrounding it. They wanted to maintain compatibility with their tools, POSIX, and especially a filesystem-centric model, but with more flexibility, security, and reliability. Bushnell and supporters decided on a microkernel approach, built around Mach specifically, kept privileged the minimum apps needed, kept as much in the standard library as possible, and tried to rework that combination into something largely compatible with UNIX but different & distributed underneath. All systems up to that point had failed to even emulate UNIX on Mach without compatibility, performance, or security issues. However, they managed to mostly integrate it with better performance, decent compatibility, better (but not good) security, and unknown [to me] reliability. The proxying approach to service construction, used prior in capability & security kernel work IIRC, was also independently invented in Hurd with lots of improvements in usability and consistency. They met their goal of flexibility and of making OS modifications debuggable outside the kernel.

Now I contrast it to the other work a bit. Work prior to it, esp KeyKOS, managed to decompose the system further with the desired properties: strong security, good reliability, flexibility, and high performance. QNX created an integrated, UNIX-like OS that was fast, real-time, flexible, multi-server, and lacked Mach's weaknesses. They made it distributed & SMP-aware in their post-2000 version. L4 and the separation kernels stayed with custom components directly on the microkernel, virtualizing away POSIX/Linux/BSD as user-mode processes with lightweight spawning (e.g. Dresden's TUD:OS). The latter was partly due to the perceived impossibility of integration without API-related problems like those in the old Mach+UNIX work.

Further, many of these assumed and maintained POLA in much of their design, while Hurd didn't. Two examples: it is too trusting of UNIX assumptions, necessitating full enumeration for defensive programming against many unknowns (Walfield & Brinkmann 2007); and it holds assumptions of its own, like "you can't harm a process by giving it extra permission" (Bushnell 1996). So, doing the comparison, its design has serious and often fundamental problems that others avoid, stemming from the authors' specific requirements mandating certain components and interfaces.

Let me summarize. It's interesting work given the challenges and what they achieved. But it has too many problems that are designed in, which the alternatives designed out. Mostly due to trying to shoehorn the monolithic, insecure UNIX/POSIX API into a multi-server, robust microkernel with tight integration. Hard to say if further work is worthwhile given the inability, by 2007, to fix fundamental issues caused by mismatches between their goals, Mach, and the UNIX/POSIX API. (Have they been fixed since?) Although some precedent existed, the project did seem to make a lasting contribution via its translators. These were effective and UNIX-like enough that they got uptake that led to FUSE and later rump kernels. Both are being used in many application areas ranging from development to desktop/server use to cloud applications.

So, that's the overall take. It was a failure in many respects, but not a total waste. The translator concept was a significant win in FOSS with long-term benefits. The battles of mixing two incompatible systems might benefit other work extending UNIXes, or even non-UNIXes, using their RPC model. People wanting Hurd to succeed should read the critique and solution papers while knocking out as many of these issues as possible. By 2007, and maybe currently, numerous attributes of the system are inferior to other works. Anyone wanting to meet the goals of the project is better off building on other approaches, esp non-Mach, while considering selectively breaking the POSIX model or even entirely virtualizing some aspects. That's my current take unless 2007-2015 brought in amazing, fundamental improvements.

Thoughts on the semi-fresh analysis?


Not quite. The end goal of the GNU project on the macro level is to replace all proprietary operating systems (not just proprietary Unixes) with GNU. But then an explicit goal of the Hurd specifically was freedom from tyrannical sysadmin policy, which necessarily included freedom to host one's own private namespace in a multi-user system that doesn't affect the global namespace.

Currently, the goal is indeed exploration. RMS and GNU have given up on it and morale is low. Can you blame them?

In fact, RMS' decision to go with Unix was a pragmatic one. He himself was always more a fan of ITS and Lisp Machines (see his October 1986 speech at KTH), but rightfully sensed no one would end up using such a solution, and thus decided a clone of Unix with non-breaking semantic improvements where possible and a Lisp-y userspace was the best way to bridge adoption with wanting to push forward. Hurd satisfied the first; the second unfortunately stagnated, as Guile never crossed the chasm until very recently with GuixSD.

QNX was proprietary and didn't have its architectural unveiling until work on Hurd was started. It again did not have the translator concept, but like MINIX 3 had static servers. KeyKOS' main selling point was the single store and the orthogonal persistence, which nominally could be done on Hurd, e.g. by integrating Mach VMM policy into a libdiskfs. It was not a priority then. L4 was too spartan for the experiments on object-capability systems that the Hurd devs embarked on: https://www.cs.cmu.edu/~412-s05/projects/HurdL4/hurd-on-l4.p...

It's true that security was never a primary orientation of the Hurd, but its security is still better than that of monolithic Unixes. And do keep in mind QNX and MINIX 3 are not nominally security-oriented, either. It's the communication boundary structure (and in Hurd's case, port rights and auth server) that intrinsically create superior cases.

Walfield and Brinkmann's 2007 critique was done from the perspective of developers themselves. It wasn't a general criticism of the Hurd, but one of specific problems related to resource accounting that other u-kernel and most monolithic kernel designs suffer from. Relative to the state of the industry, it is still a leap forward. Relative to the state of the art, it is not, but then so do QNX and MINIX 3 make sacrifices.

Implementing the POSIX API has nothing to do with the Hurd's problems. And, indeed, Hurd does break the POSIX model in several aspects (e.g. allowing multiple uids per process). But then we must disqualify QNX and MINIX 3 by this token, too.

The fundamental issues with the Hurd then have little to do with the 2007 critique. More pressing is its incompleteness, a samsara which rump integration will hopefully finally break. If all else fails, using rump drivers on a heavily stripped Linux with Mach module emulation to create a true hybrid kernel, coupled with functional package management and system state management via Guix, with orthogonal persistence and network transparency added on, could be another future direction for the Hurd I might end up exploring. Perhaps serving triple duty as a multi-user OS, a unikernel, and a container OS via subhurds.

The Mach improvements have mostly been surface-level, but again, I would not see it as a huge bottleneck (only an inelegance), and in the hybrid case it becomes irrelevant beyond serving as a programming API, and it isn't that bad of one.


"Currently, the goal is indeed exploration. RMS and GNU have given up on it and morale is low. Can you blame them?"

I said it was a failure and they should give up on it outside research. So, no, I support their decision. ;)

" But then an explicit goal of the Hurd specifically was freedom from tyrannical sysadmin policy, which necessarily included freedom to host one's own private namespace in a multi-user system that doesn't affect the global namespace."

That makes sense, esp given the time it was made. Clouds, etc. are still experimenting with models for that due to demand.

"QNX was proprietary and didn't have its architectural unveiling until work on Hurd was started."

And then they had the details they needed for the next 10-20 years. So, my memes about applying proven methods or not learning from the past may apply here.

"KeyKOS' main selling point was the single store and the orthogonal persistence,"

KeyKOS's main benefits, as they touted them, were security/reliability w/ fine-grained POLA, high performance, and flexibility. One could even extend and share things in such a way that untrusted code couldn't swamp CPU or memory. KeyKOS did in many use cases what Mach projects failed to do, thanks to Mach's design decisions and complexity. And that's despite experienced, brilliant kernel programmers working full-time on it. KeyKOS also had the two benefits you mention. So, building a KeyKOS-style kernel had huge benefits over a Mach kernel. That's why Shapiro cloned it with EROS. I expect porting such attributes to this Mach project will also be (or were) hard.

"It's the communication boundary structure (and in Hurd's case, port rights and auth server) that intrinsically create superior cases."

It's isolation, resource management, and communication. They're necessary for reliability and security, as both involve components behaving badly. Also, for reliability, add in self-healing capabilities, with at least monitoring and restarting of drivers. The last is a feature, but the first three are baked into the underlying architecture. The paper noted it lacked one of them for sure. Further investigation might find problems with the others.

"Walfield and Brinkmann's 2007 critique was done from the perspective of developers themselves. It wasn't a general criticism of the Hurd, but one of specific problems related to resource accounting that other u-kernel and most monolithic kernel designs suffer from."

"Implementing the POSIX API has nothing to do with the Hurd's problems."

We might be reading different papers as there were several. Here's the one I'm referring to:

http://walfield.org/papers/200707-walfield-critique-of-the-G...

It's entirely about Hurd, except when it's critiquing Mach. It first describes the architecture and mechanisms. Then it goes into the critiques. I'm ignoring those that show up anywhere and just require a tweak. The biggest, for reliability and security, is that its trust model is discretionary at the app/process level: a program inherits the user's privileges with no further limits, and any reductions are at the program's discretion. Capability models, MAC models, and even Windows Vista techniques counter threats in that space that Hurd wouldn't challenge.

The next set comes from the UNIX/POSIX compatibility and integration style. The filesystem section shows how vanilla UNIX apps might break or require significant debugging due to UNIX assumptions that don't apply. The server allocation problem with open is another, but it might be fixed if INTEGRITY RTOS's brilliant defence is used: the calling process donates its own CPU/memory for external functions. Would that break POSIX/UNIX behavior? Idk. The author follows up, though, by pointing out that most things in Hurd have lots of ambient authority and would take modifications. These are usually due to UNIX compatibility. Note that ambient authority, along with its resulting downtime and hacks, was one of the reasons UNIX was ditched for capability architectures by the projects that built them.

They then look at Mach. One problem, optimization with predictable patterns, is indeed a general problem for microkernels this flexible and dynamic. This flexibility also makes for unpredictable delays. That's why most microkernels are static and/or optimized on critical parts. Mach also lacked secure management of global resources in such a dynamic system. The authors mention KeyKOS/EROS, which didn't have that problem. There are others, but those were best at this IIRC.

So, the paper and its problems were mostly specific to Hurd and its usage of Mach. In each case, it's an open problem how to fix that in Hurd, while each is a done deal in one or more competing technologies. Mach and UNIX compatibility either prevent the fix or make it quite difficult. So, these should count as Hurd/Mach problems caused by their specific design(s).

"Relative to the state of the art, it is not, but then so do QNX and MINIX 3 make sacrifices."

They do. I'm not calling them perfection, either. However, they intentionally make choices that make their job easier at the kernel, architecture, and API levels. Good engineering involves making the right tradeoffs rather than setting oneself up to fail. That's my problem with Hurd if judging against its goals; in exploration mode they just need to make progress. Or make changes.

Note: I do disqualify the two from the secure attribute due to their compatibility choices, and QNX's UNIX code got it smashed the first time an online presence was publicized. Reliability, performance, resource management, and UNIX/POSIX integration? Among best in class.

"The fundamental issues with the Hurd then have little to do with the 2007 critique. More pressing is its incompleteness"

That's another issue, on top of the fundamentals I referenced earlier. We're probably going to learn about some more side effects of them when they add rump kernels, which assume a monolithic UNIX, to the mix. It's still a good move on their part, though, and would bring the lineage full circle.

"f all else fails, using rump drivers on a heavily stripped Linux with Mach module emulation to create a true hybrid kernel, coupled with functional package management and system state management via Guix with added on orthogonal persistence and network transparency could be another future direction for the Hurd I might end up exploring."

Now, that sounds like the hacked together, against-all-odds, balls-to-the-wall solution I like to see! Although above issues will affect multi-user part: specific examples were already cited. Might be fine if they're benign users. So, build the monstrosity and make it work! :)

"I would not see it as a huge bottleneck (only an inelegance) and in the hybrid case it becomes irrelevant beyond as a programming API, which it isn't that bad of one."

In the past, it was a bottleneck, but people thought the API was fine. So, it will be interesting to see what happens on containerized, multi-tenant, IPC-heavy workloads: it vs. alternative schemes.


I recall that the AT&T/BSD lawsuit was relevant then, especially for Linux as an alternative at the right time.

A microkernel design seemed sane from experience in the UK. I don't know if it was technically a microkernel, but the GEC OS4000 Nucleus <https://en.wikipedia.org/wiki/OS4000> supported a system that made VMS VAXen look slow in the 80s. (Latterly, OS6000, with a software Nucleus, still compared well with System V on the same hardware.)

L4 was already tried for Hurd, surely.


That's a good point. The lawsuit might have factored into the decision, although they don't reference it. In any case, I think Nucleus is a good recommendation, as I also regularly cite it as a great idea worth emulating. That GEC work integrated just the right functions into the kernel, built the entire OS around that for consistency, and had non-software-writable firmware for security. Way ahead of its time.

Well, one company did build on that work, as far as I can tell. They built on it so well they dropped bombs on the formal verification community. You might be surprised which company that was.

Safe to the last instruction: Automated verification of a type-safe operating system (2010)

http://research.microsoft.com/pubs/122884/pldi117-yang.pdf


[I assume, but don't know, that GEC 4000 was influenced by Brinch Hansen's RC 4000, but all I remember about the then-ancient RC 4000 at the Niels Bohr Institute was that it seemed pretty primitive and the systems language was Algol, cf. OS4000's structured assembler, and was pretty grotty.]

I don't know to what extent security was the reason for the hardware nucleus rather than a by-product of the fast system. (I recall French colleagues thought the context switching was slow, thinking it must be in milliseconds, not microseconds.) However, the software versions for the later OS6000, and whatever the 88k implementation was called, seemed OK.

I don't understand the reference to the paper. I don't think there was any formal verification in OS4000 (or the hardware, given the 4190 arithmetic bug), or GC in Nucleus, and the typed assembler doesn't look to have much in common with Babbage. Which company?

Good to see an OS4000 user who appreciates its significance after all this time, anyhow.


"I assume, but don't know, that GEC 4000 was influenced by Brinch Hansen's RC 4000, but all I remember about the then-ancient RC 4000 at the Niels Bohr Institute was that it seemed pretty primitive and the systems language was Algol, c.f. OS4000's structured assembler, and was pretty grotty."

Interesting. I never thought about a connection to RC 4000: just figured it was another example of how things were named back then. Could've been some influence.

"I don't know to what extent security was the reason for the hardware nucleus rather than a by-product of the fast system. (I recall French colleagues thought the context switching was slow, thiking it must be in milliseconds, not microseconds.) "

I doubt security was the reason as well. Few were thinking of it then. It was just a benefit of the design.

"I don't understand the reference to the paper. "

The original system designed a core component to make everything else easier to implement and consistent. That was called Nucleus. It had specific functions key to any OS. The rest was built around it.

The Microsoft system designed a core component to make everything consistent and verification easier. That was called Nucleus. It had specific functions key to the OS and many potential others. The rest was built around it.

I just thought whoever designed Verve might have heard of GEC 4000 and copied it to some degree. Or independently invented the concept. GEC 4000 still has advantages, though.

"Good to see an OS4000 user who appreciates its significance after all this time, anyhow."

I've never used it or any machine from the 1960's-1970's. Someone asked vezzy-fnord recently why he has so much arcane knowledge about computing and programming. I answered for myself on that, as I similarly research old work for the wisdom to be gained:

https://news.ycombinator.com/item?id=10488298

I have literally hundreds of papers on old stuff with many solving key problems and some doing better than today's stuff by at least one metric. I expressed the problem and potential solution here:

https://news.ycombinator.com/item?id=10269245

So, referencing Nucleus is just me doing my part. Neat that you got to use the thing for real. How robust was the system compared to others of the time? Did you find the kernel design or process separation to have practical benefit in its day-to-day operation or administration?


Another notable difference between Hurd and MINIX is that Tanenbaum got millions of funding from the EU to improve MINIX. Having the organizational backing of a research university helps, too.


How is that a difference? Stallman has gotten millions from the MacArthur Foundation, Takeda Foundation, etc., etc. Neither project is short of funding.


Not true. The awards from MacArthur and Takeda that Stallman received, combined, total just over half a million USD [1][2]. Keep in mind that this is spread across all of GNU and goes to keep the FSF machine going as well.

Meanwhile, Tanenbaum's grants from the EU total approx. $6 million for the express purpose of producing a reliable microkernel and building an operating system on top of it [3], all under the auspices of a research university. And as others have noted, most of MINIX's userland is the result of splicing in large parts of the existing NetBSD userland, so the skew in attention the kernel received is pretty significant, cf. GNU's Hurd.

1. http://tech.mit.edu/V110/N30/rms.30n.html

2. http://www.gnu.org/press/2001-12-03-Takeda.html

3. https://www.bsdcan.org/2015/schedule/speakers/258.en.html


Which means funding or sponsored developers are critical to the success of OS projects if anything significant is to happen in a reasonable amount of time. So, the failure was to match funding rather than features per developer. ;)

On top of their design decisions that made the project ridiculously hard in the first place.


It's not, except in terms of the features they can produce and the bugs they can eliminate. The research I did in response to vezzy-fnord's comment shows that Hurd ignored a proven codebase (4.4BSD) in favor of trying to build on Mach, a problematic microkernel. The Hurd author admitted the first part. They continued down the path that didn't work, while MINIX 3 adopted a better microkernel architecture built on their proven codebase. So, their stuff is predictably in better shape.


Right, but "they are more successful because they made better decisions" is tautological.


A kind of troll-like remark that offers no counter-points or further information. I'll reply to it anyway. Here are the specifics I gave that they could've applied at the start or later on:

https://news.ycombinator.com/item?id=10486120

Note the quote from the head of the project, who confessed to the specific mistake. That's not tautological at all: it's what 5-10 seconds of Google would've earned you. It also matches the first idea that came to my head about where I would've started at the time. Linus and Tanenbaum both started with the MINIX codebase, one indirectly and one directly, to achieve success.

So, Hurd was just Doing It Wrong. Further, it's a proven heuristic that starting with a poor architecture, or not building off a good codebase, often leads to failure.


Yes, I suppose you could consider better decisions leading to better outcomes to be "problematic". Much more convenient to ascribe it to more funding, i.e. privilege. Now who's trolling?


Hard to imagine a better outcome when Hurd still hasn't achieved its original goals, is worse than small monoliths, is behind many microkernels, and its own author said doing BSD instead would've been "ideal." More like the worst outcome one can get.


Hey, millions of dollars can always help. It will be in my HOWTO for successful FOSS if I ever write one. :)

I think the simple architecture is their main benefit, though. They only put in complexity where it has the most benefit. Then they reused NetBSD's work. A common cheat.


The units of measure there are euros, even, so after the conversion to USD we're talking about even larger numbers. Not an order of magnitude of difference, though.


The desperate need for a decent "can run as a desktop" userland suite of services for the L4 family of microkernels might actually help get them more manpower working on the project. Other than that, yeah, sticking to what they know does make sense.


    > my lay person's understanding of microkernels
I expect that's rather _too_ modest!


Not really. I haven't worked on any meaningful kernel and my understanding is just from textbooks and toying around a bit (good old hello world level x86 kernel). I try to keep up to date but a lot of stuff in kernel development and low level hardware land is pretty much over my head.


Do you know something about the GP's experience in this area? (it doesn't seem obviously disclosed)


GP? Assuming you mean the poster of the parent comment: no, I don't know anything more than you do; I was just amused because "a lay person's understanding of microkernels" is surely "what's a microkernel?"!


Does anyone have practical experience running GNU Hurd as a personal computing environment? I see they have Debian running on GNU Hurd... if I were to install that on my X200 running with LibreBoot (currently running Trisquel Linux) what would my user experience be like?


You probably couldn't get it to run on bare metal, since I don't think Mach handles message-signaled interrupts for PCI, on top of the lack of USB and sound for desktop functionality. It's strictly developed and used in virtualization currently, and the motivation for rump kernel integration is to finally escape the hellish circle of that and relying on old NetDDE drivers, perhaps finally making it an everyday OS.

On the other hand, it is stable enough to serve the Hurd wiki. In addition, it has ~81% of Debian building, it can run Xfce and Firefox... not bad. It's much more than MINIX 3, which is solid but lacking in packages.


Yeah, I tried to run it on real hardware a couple of years ago, and the response from the community was "no one does that, use virtualization."

I got the sense that it was a friendly group. Expressing interest, even as someone who just wanted to play around and see what it was all about, was well-received.


Since you have the luxury of a FSF-blessed laptop, I would recommend trying GuixSD[0]. Transactional package managers are the best thing since sliced bread.

0: https://www.gnu.org/software/guix/


Note that while Guix, the package manager, has been successfully ported to the Hurd, GuixSD has not. Still some work to do there, but I invite anyone interested to join in and help make it happen.


Watching the videos from https://archive.fosdem.org/2015/schedule/speaker/samuel_thib... it felt like he used it as his main OS [1].

[1] He talked about the Hurd's advantages for mounting remote ISO images; it felt a little too practical not to be something he experienced on his own daily driver.


Since the Linus discussion years ago, there has been this myth that microkernels are slow. And it's the first post, here again, whenever someone mentions Hurd.

Nevertheless, we read stories here again and again about how to achieve faster server speeds (Cloudflare stories come to mind) with userspace TCP stacks etc.


It is a pity IOKit is not ported along with the other userland stuff.


From my understanding, the microkernel approach has been deemed impractical for real-world use. I wonder if there is anyone out there using it?


Your understanding is, I'm sorry if this comes across as hostile, absolute nonsense. You should actually consult the literature instead of reading Linus Torvalds' rants.

Back in 1990, the Amoeba system revealed a 2x improvement in throughput and a 5x improvement in latency for its RPC over the SunRPC of the time. [1] Relative measurements for the V-System and Sprite were similar. QNX, a microkernel-based system with high commercial success and a long history (used by over 40 automotive manufacturers), has very fast IPC thanks to integrating message passing with the CPU scheduler. After years of research, Jochen Liedtke introduced L4 in 1996 [2], using only seven generalized calls, with a 20x improvement in speed over prior art such as Mach. Mach, by the way, is the glaring exception to microkernel performance because of complicated message packing and port rights checking. Yet the Hurd developers are doing well in optimizing it, and to this day Mach is used to malign microkernels by people with no background on the subject. OKL4, in turn, had shipped in ~1.5 billion devices by 2012, powering the baseband processor behind nearly every mobile phone. [3] MINIX 3, further, shows only a ~5-10% performance drop relative to monolithic Unixes, and this as of 2006. [4]

They never learn.

[1] http://www.scs.stanford.edu/nyu/03sp/sched/amoeba.pdf

[2] https://homes.cs.washington.edu/~bershad/590s/papers/towards...

[3] http://www.creativemac.com/article/OK-Labs-Software-Surpasse...

[4] http://www.minix3.org/doc/ACSAC-2006.pdf


The Apple Newton used a microkernel inspired by Mach, with a custom MMU that reduced the cost of context switching.

It was . . . okay. The number of cycles it actually took to do a lightweight message pass was dismaying, though. The newt would have benefited from a less partitioned design for the underlying page and storage management.

I used another microkernel OS on a set-top box. Now that was a misery, but one I attribute more to the uncaring attitude of the company that wrote the STB's firmware than to any inherent problem with their OS (a decade later, critical and customer-affecting race conditions are still present in their code). To be honest, their OS didn't help.

I don't think microkernels are bad. Making bad choices that affect performance, battery life, and other things that customers care about is bad, and you might have to break some abstractions in your microkernel to address those.


> Now that was a misery, but one I attribute more to the uncaring attitude of the company that wrote the STB's firmware than any inherent problem with their OS (a decade later, critical and customer-affecting race conditions are still present in their code). To be honest, their OS didn't help.

Hmmm STMicro-based? :)


68K, actually, and MIPS. Different platforms, same damned OS bugs. Let's hear it for portability :-)


Didn't the Microsoft NT project start off as a microkernel and then change course to a hybrid model?

https://en.wikipedia.org/wiki/Hybrid_kernel


NT was never a microkernel. It has an internal microkernel-like design, though; it's divided into cleanly separated modules, and modules may only communicate using a special IPC mechanism. But like OS X, the classic pieces (file systems, graphics, device drivers) run in kernel mode.

Originally, graphics ran in user mode, but Windows NT 4.0 moved it into kernel mode for performance reasons. Vista got a new device driver model and a new graphics driver model (WDDM), which has a way for graphics to run partly in user mode. Vista and later also support user-mode device drivers.


Windows NT 3.1 was based on Microsoft OS/2 NT 3.0; it was originally Microsoft's version of OS/2, with the Windows 3.x GUI added to it instead of IBM's Presentation Manager.

The NT kernel and OS were designed to run DOS, POSIX, and OS/2 1.x CLI apps (but not GUI apps), plus 16-bit and 32-bit Windows apps. Later on, OS/2 and POSIX support got dropped.

Microsoft OS/2 NT 3.0 was a rewrite of OS/2 for 32-bit systems; before that, OS/2 was on 16-bit systems with 1.x. IBM and Microsoft fought over 2.0 standards, and Microsoft stopped supporting OS/2 and focused on Windows instead, renaming their OS/2 to NT, for New Technology.

The interesting thing about Windows NT was that Microsoft had plans to port it to different processors, but eventually dropped those plans and stuck with x86 processors instead.

https://en.wikipedia.org/wiki/Windows_NT

It was designed as a modified microkernel.


> The NT Kernel and OS was designed to run DOS, POSIX, ...

What you're referring to were what the NT team called "personalities", or more formally "environment subsystems".

A subsystem was an API on top of the native NT API. Win32 was one such subsystem; users were exposed to the Win32 API, and weren't supposed to talk directly to the intentionally undocumented NT API (although some developers did reverse-engineer it eventually). POSIX and OS/2 compatibility was implemented as subsystems, but these were rarely used by anyone, and eventually removed in XP. DOS was not a subsystem, but rather its own thing.

Note that this has nothing to do with microkernels. In a microkernel architecture, things like file systems and device drivers run as normal processes separate from the kernel. In NT, everything was in the kernel. These subsystems didn't act as part of a microkernel design.

Oh, and NT was not a rewrite of OS/2. It was written from scratch. The project was started back when Microsoft and IBM had a good relationship, and NT was originally planned as OS/2 3.0.


> The Interesting thing about Windows NT was that Microsoft had plans to port it to different processors

They didn't plan it, they did it. I was looking after Windows NT on Alpha processors at one job. I never personally saw NT on PowerPC or MIPS, though they did exist.


You got it wrong (your first sentence): Windows NT 3.1 was written from scratch by Dave Cutler and his group from 1989 to 1993. Originally there would have been an OS/2-compatible subsystem, and some ideas stem from OS/2, but NT has a lot more similarities with VMS, Mr. Cutler's previous OS project at DEC.


As an aside, how is Mach doing these days? I suppose the most visible use is OS X, and I assume Apple's engineers have evolved it greatly to keep up with the times compared with the earlier versions of OS X?


Apple's XNU is de facto monolithic in nature. It simply reuses OSF Mach primitives to build the BSD Unix layer next to it. Mach IPC is used primarily for service discovery these days, I think, with launchd serving as the bootstrap server, which maintains name-port bindings for Mach services to call into. This is a side effect of Mach port namespaces being internal to tasks, and hence not having an intrinsic arbiter. In contrast, the Hurd decentralizes this activity by just assigning a bootstrap port for the client making the RPC call to rendezvous with the server or library in question. These days, OS X seldom uses Mach IPC directly, but rather goes through XPC.
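
To make the service-discovery role concrete: a client asks launchd, through the bootstrap port every task inherits, for the send right registered under a service name. A rough, untested sketch using the classic (now-deprecated) bootstrap API; the service name here is made up, and newer code would go through XPC instead:

    #include <mach/mach.h>
    #include <mach/mach_error.h>
    #include <servers/bootstrap.h>
    #include <stdio.h>

    int main(void)
    {
        mach_port_t service;

        /* bootstrap_port is handed to every task at spawn time;
           launchd resolves the registered name to a send right. */
        kern_return_t kr = bootstrap_look_up(bootstrap_port,
                                             "com.example.demo",
                                             &service);
        if (kr != KERN_SUCCESS) {
            fprintf(stderr, "lookup failed: %s\n", mach_error_string(kr));
            return 1;
        }
        printf("got send right %u for the service\n", (unsigned)service);
        return 0;
    }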

Mach had a surprising resurgence in interest over NextBSD implementing a ton of compatibility layers and mangling Apple sources in a weird, misguided attempt to make FreeBSD the new OS X. We'll see how it turns out.


By way of example, you can't create a Mach task that isn't a BSD process as well: https://github.com/opensource-apple/xnu/blob/10.10/osfmk/ker...


I think a lot of the common idioms passed around about microkernels stem from the old Torvalds v. Tanenbaum argument [1]. It wasn't that Torvalds was right and Tanenbaum was wrong. It was simply that Tanenbaum had reliability and theoretical correctness in mind, while Torvalds was concerned with performance. The less context switching you do, the more performant your computer will be. We can take this to the extreme and look at OSes like TempleOS [2], where EVERYTHING runs in kernel mode. Most people prefer a bit more reliability, and so modern desktop kernels are generally hybrid kernels (with the notable exception of Linux, which is, of course, monolithic). I don't think there's any real technical reason a microkernel couldn't work on today's hardware, which is many orders of magnitude faster than what we had in 1992, but a hybrid kernel is always going to be faster.

[1] https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_deb...

[2] http://www.templeos.org/


I always found it interesting how much Intel's protection ring architecture dictated the direction of operating systems back in the 1980s. (I'm sure Intel didn't even invent it; similar concepts were probably already in place in mainframes?)

x86 has highly specific support for protection rings and switching between them, as well as things like page faulting and interrupt management, leading to the classic kernel/user split with a kernel as a privileged actor underneath a user mode.

But having just one exclusive, reserved "kernel mode" is starting to look old, which is why there's now so much talk about virtualization and exokernels and so on. The microkernel design certainly seems very elegant, but it looks to me like Intel's architecture was always a stumbling block. You have to wonder about what hardware support you could invent that would make microkernels a better fit.


Protection rings are a lot older than x86. They were first used on the Honeywell 6180 for Multics back in the 60s, and have been used in pretty much every sufficiently large processor since. The Vax had a two-ring layout, as did the 68k, so the seeds for OS development in the 80s were already pretty soundly planted.

Page faulting was also on older systems, because putting those things in hardware is a lot faster than doing them in software, plus controlling memory access really should be a privileged activity. Interrupt handling is in a similar situation, and even there, you still need some process handling the interrupt vector table. It's possible to make most of an interrupt handler a user-level process through page table and interrupt return address hacking, but for the moment, that's unfortunately rare.

As for your desire for a replacement for one exclusive, reserved kernel mode, there have been a few OSes that have tried to break that pattern. OS/2 used Ring 2 of the x86 for drivers, but unfortunately that bit wasn't added to Windows when they were forking NT. Being able to put semi-trusted drivers in a separate area, and perhaps even a user session manager too, could allow for some interesting security experiments that don't rely on (para)virtualization.

Hardware-wise, it would be useful to have hardware contexts, like SPARCs have, so that the group of registers a process has can be swapped in and out a lot more easily. Context switching is expensive, and building processors that recognize that the modern user tends to have more than one task running would be a pretty good performance win.


The VAX had 4 modes: kernel, executive, supervisor, user. I just read [0] that kernel and executive had implicit SETPRV privilege (and thus access to those modes would be tightly controlled, because SETPRV allows bypassing all access controls); and I think I heard a while back that normal programs ran in user mode but the shell-equivalent ran in supervisor mode (and [1] sort of implies this).

[0] http://h30266.www3.hp.com/odl/vax/opsys/vmsos73/vmsos73/5841...

[1] "If you created the name with a DCL command, the access mode defaults to supervisor mode. If you created the name with a program, the access mode typically defaults to user mode." http://h71000.www7.hp.com/doc/731final/4477/4477pro_007.html


You're right. The biggest news for operating systems is the fact that the hardware virtualization features on modern processors allow breaking free from the old model.

A modern Intel/AMD processor lets you virtualize the CPU, MMU, and I/O. Network cards and HBAs can be partitioned into virtual NICs.

Two primary functions of a classic OS are process isolation (i.e. virtual memory) and hardware sharing. But now that these two primary functions have been pushed down into hardware, it has changed the way we think about operating systems considerably.

There isn't much left for a classic OS to do other than provide a common set of APIs for programs to talk through. Yet these days we can statically link even the largest libraries into our exokernels. And hypervisors are capable of using techniques like same-page merging to reduce the memory burden of running many large (exo)kernels at once.

I don't expect the classic one-OS-running-many-processes model to go away overnight, or possibly ever. But the exokernel model is very compelling for large-scale, high-performance software services, and it will continue to catch on.


Take a look at the Mill's security presentation.

It's an architecture where there is no real border between a monolithic kernel and a microkernel. There are just differences in access control policies, to the point that those names lose their meaning.

As always, it's a very interesting architecture. I hope they produce it someday.


GEMSOS and STOP OS built highly secure systems (for the time) on Intel by using the protection rings and segments. Both put only the security kernel in kernel mode, user apps in user mode, and OS services in middle rings.

Here's architecture for GEMSOS. See design/assurance sections.

http://aesec.com/eval/NCSC-FER-94-008.pdf

Look up the SCOMP Final Evaluation Report if you want to see how STOP OS used four rings and had an IOMMU despite that being "invented" recently. ;) The XTS-400 is the Intel version; it uses the same architecture minus the custom hardware, and is still doing its job at hundreds of installations.

Definitely a major performance hit on both GEMSOS and STOP, but they were 80's-era stuff. Modern separation kernels do most stuff with just user/kernel mode separation and tiny kernels (4-12 kloc). LynxSecure claims this helps them keep the CPU 97% idle with 100,000+ context switches a second. I'd expect the old architectures to run even faster with modern techniques.


> Torvalds was concerned with performance

Performance is not a single metric. There are throughput and latency, and then you can screw it all up and make it much harder by demanding guarantees on either of those.

Performance without guarantees is worth very little in quite a few situations.


Good point. The other argument is also true: security and correctness are not a single metric, either.

With modern (buggy) hardware and DMA access, when your driver and/or hardware fails, all bets are off. Some hardware may be possible to reset (much as you'd reinitialize a kernel module in Linux), but sometimes your best course of action is a complete reboot.

As for security, you also need to take a long hard look at the operating systems your operating system relies on, such as the ones powering your disks, NIC, PCI controller, etc. There are some potentially tricky security interactions with them.


SMM and other such ring "-1" type "services" in modern CPUs make your point quite clear to anyone who digs deep enough.

When trying to secure a system, we have reached the point where you sometimes have to ask "is this CPU opcode safe?" Sometimes it just feels like modern hardware complexity is reaching some kind of critical-mass threshold for "stupid shit".


That point was reached back in the 90's, when the first security evaluations of the Intel architecture were done, found tons of black boxes like SMM, and said to ditch it for security or virtualization. Invisible Things did a good job demonstrating an old risk, but people should've ditched it long ago.

If you want verifiable hardware, look up the VAMP processor as it has everything from design descriptions to formal proofs of correctness. Not sure about its availability. SPARC and RISC-V are very open with open-source implementations available with Linux and compiler support. So, there's a solution if people ever want to put the work in.


QNX [1] is probably the most successful and most mainstream example of a microkernel in real-world use. It's extremely popular for embedded use and/or apps with real-time demands, most famously in nuclear power plants. I've never used QNX myself, but I believe people also run full desktop environments on top of it.

[1] https://en.wikipedia.org/wiki/QNX


I've used the desktop version. It's amazing: it looks, and feels, and operates, just like a Unix, except if you want it it's got this ultra-fast real-time core.

When I say 'just like a Unix', I mean: you log in and start up some terminal windows and it's sh and you can compile and run X11 software with configure scripts and gcc and you can print stuff with lpr and it all just works. It even runs Java --- my copy was completely self-hosting via Eclipse!

Years ago there was a single-floppy QNX demo disk: you booted from this and you got a basic desktop with dialup modem support and a web browser. SINGLE FLOPPY.

Hey, look! I was all set to write a paragraph about how sad it was that you couldn't get the bootable QNX CD any more, but look what I found!

http://www.qnx.com/download/feature.html?programid=19602

It doesn't even need registration and a login any more! Holy crap, I have to see if this still works...

Edit: looks like it won't install without a license key. I'll see if I can get one...

Edit: apparently I have an account with QNX dating back from 2012, with three hobbyist license keys, one of which makes the installation CD happy. I don't know whether these are still available, though.

Edit: so it installed into a VM in about two minutes flat, and I now have a very old copy of Firefox running. It can't see the network, but that's nothing to do with QNX and everything to do with my inability to set up kvm. I need to find a real machine to run this on.

Edit: it will only install from CD, not from USB (unetbootin doesn't help). And I've lost the power cable for my CD burner. So until I find it, I won't be able to proceed here. Sorry. Still good to know this still exists, though...


> Years ago there was a single-floppy QNX demo disk: you booted from this and you got a basic desktop with dialup modem support and a web browser. SINGLE FLOPPY.

Dan Hildebrand's original announcement on comp.os.linux.development.system back in 1997: http://marc.info/?l=freebsd-chat&m=103030933111004

Archived homepage of the QNX Demo Disk: http://web.archive.org/web/20011019174050/www.qnx.com/demodi...

including "How we did it":

http://web.archive.org/web/20011106140711/http://www.qnx.com...

and "What people are saying" (to give folks today an idea of the excitement around a 1.44MB GUI OS with networking and Japanese support back in the 90s):

http://web.archive.org/web/20011106141359/http://www.qnx.com...

Download links and screenshots:

http://marc.info/?l=freebsd-chat&m=103030933111004


And a very nice paper by its wonderful last author:

https://cseweb.ucsd.edu/~voelker/cse221/papers/qnx-paper92.p...


They have a license for evaluation & noncommercial use. http://www.qnx.com/legal/licensing/non_commercial.html


I really enjoyed using QNX (I had a job porting Unix programs to QNX---it was not that hard of a job, actually). What really blew me away was the inherent transparent networking. I could, from the command line, run a program on my computer, referencing a file on a second computer, pipe the output to a program on a third computer, which sends the output to a file on a fourth computer. Really mind-blowing stuff.
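
For those who haven't seen it, here's a minimal sketch of what that pathname transparency looks like under QNX Neutrino's Qnet (the node names are made up, and this is the Neutrino-era API; the older QNX 4 I worked with used a //node syntax instead):

    /* Sketch of QNX's network-transparent pathname space (Qnet).
     * With Qnet running, every node's filesystem shows up under
     * /net/<node>/, so plain POSIX calls reach across machines.
     * "node2" and "node4" are hypothetical node names. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Read a file that physically lives on another machine... */
        FILE *in = fopen("/net/node2/home/user/data.txt", "r");
        if (in == NULL) { perror("fopen input"); return EXIT_FAILURE; }

        /* ...and write the result to a file on a different machine. */
        FILE *out = fopen("/net/node4/tmp/result.txt", "w");
        if (out == NULL) { perror("fopen output"); fclose(in); return EXIT_FAILURE; }

        int c;
        while ((c = fgetc(in)) != EOF)
            fputc(c, out);

        fclose(in);
        fclose(out);
        return EXIT_SUCCESS;
    }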

And fast. I knew the owners of a company that sold X Windows commercially, and their fastest version ran under QNX, using the native QNX message-passing facilities. The fact that the QNX kernel on a Pentium was only 8K in size was also mind-blowing.
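
For flavor, the send/receive/reply cycle at the core of that message passing looks roughly like this (a minimal single-process sketch using the standard <sys/neutrino.h> calls from QNX Neutrino, which postdates the QNX I used; the message contents are made up):

    /* QNX Neutrino synchronous message passing: a server thread blocks
     * in MsgReceive(), the client blocks in MsgSend() until MsgReply(). */
    #include <stdio.h>
    #include <string.h>
    #include <pthread.h>
    #include <sys/neutrino.h>

    static int chid;   /* channel id the server receives on */

    static void *server(void *arg) {
        char msg[64], reply[64];
        for (;;) {
            /* Block until a client sends; rcvid identifies the sender. */
            int rcvid = MsgReceive(chid, msg, sizeof msg, NULL);
            if (rcvid == -1)
                continue;
            snprintf(reply, sizeof reply, "echo: %s", msg);
            /* Unblocks the sender and hands back the reply buffer. */
            MsgReply(rcvid, 0, reply, strlen(reply) + 1);
        }
        return NULL;
    }

    int main(void) {
        chid = ChannelCreate(0);

        pthread_t tid;
        pthread_create(&tid, NULL, server, NULL);

        /* Node descriptor 0 = local node; with Qnet, a remote node's
         * descriptor makes this very same call network-transparent. */
        int coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);

        char reply[64];
        MsgSend(coid, "hello", sizeof "hello", reply, sizeof reply);
        printf("%s\n", reply);
        return 0;
    }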


That sounds awesome. How does it compare to RTLinux or Linux with the PREEMPT_RT patches?


I used QNX extensively for a few projects at work, as it has historically been used quite a bit in the "automotive infotainment" world. I was really impressed by the overall clean architecture and documentation of the system, and I much prefer implementing device drivers in the QNX microkernel environment over implementing them in Linux, where interfaces are under-documented and always changing, and it's quite easy to lock up the entire system during driver development.
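
For anyone curious what that looks like: a QNX "driver" is typically just a resource manager, an ordinary user-space process that registers a pathname and serves the POSIX calls on it. A minimal sketch along the lines of the standard dispatch/resmgr skeleton from the QNX docs (the device name is made up and error handling is trimmed):

    /* Skeleton of a QNX Neutrino resource manager: a plain user-space
     * process that registers /dev/sample (a hypothetical name) and
     * serves open()/read()/write() with the default POSIX handlers. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/iofunc.h>
    #include <sys/dispatch.h>

    static resmgr_connect_funcs_t connect_funcs;
    static resmgr_io_funcs_t      io_funcs;
    static iofunc_attr_t          attr;

    int main(void) {
        dispatch_t *dpp = dispatch_create();
        if (dpp == NULL) { perror("dispatch_create"); return EXIT_FAILURE; }

        /* Start from the default POSIX handlers; a real driver overrides
         * io_funcs.read / io_funcs.write to talk to its hardware. */
        iofunc_func_init(_RESMGR_CONNECT_NFUNCS, &connect_funcs,
                         _RESMGR_IO_NFUNCS, &io_funcs);
        iofunc_attr_init(&attr, S_IFNAM | 0666, NULL, NULL);

        resmgr_attr_t resmgr_attr;
        memset(&resmgr_attr, 0, sizeof resmgr_attr);

        /* Claim the pathname; clients now simply open("/dev/sample"). */
        if (resmgr_attach(dpp, &resmgr_attr, "/dev/sample", _FTYPE_ANY, 0,
                          &connect_funcs, &io_funcs, &attr) == -1) {
            perror("resmgr_attach");
            return EXIT_FAILURE;
        }

        /* Message loop: crash here and only this driver dies, not the OS. */
        dispatch_context_t *ctp = dispatch_context_alloc(dpp);
        for (;;) {
            if ((ctp = dispatch_block(ctp)) == NULL) break;
            dispatch_handler(ctp);
        }
        return EXIT_SUCCESS;
    }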

An interesting data point, though: QNX had a very clean and modularized 'microkernel-like' network stack called 'io-net'. But due to throughput issues in some situations, they switched to a new architecture a few years ago called 'io-pkt'. This is essentially the kernel networking code from BSD transplanted to a process in QNX; one advantage of this is that there's a large stock of drivers available to port and a lot of people are familiar with the BSD networking model, but some of the lesser-used corners of it weren't fully debugged when I was doing network protocol hacking, and in general it made me sad.

Anyway, you can definitely run full desktop environments on it, and you can especially run a full tablet environment if you buy a recent Blackberry tablet, as RIM now owns QNX and uses it as their latest Blackberry OS. This was, unfortunately, a step backwards in their openness and embrace of Open Source.


I used an eval of the desktop version of QNX for a short time. Got it from a CD that came along with a computer magazine.

Even in that short time, I got the feeling that the OS was quite fast compared to the early Linuxes that I used around the same time, and Windows. I remember reading in the mag that it had the Photon GUI, IIRC, which others have mentioned in comments here. I tried out at least some apps, bash, etc., and they were all snappy.


To swipe a line from Chesterton: micro-kernels have not been tried and found wanting; they have been found different and wanted trying.


It took decades to get UNIXes reliable enough to trust for day-to-day operation, with labor estimates in the hundreds of millions to billions of dollars. Three or so people got that done in a few years with MINIX 3 using a microkernel architecture and self-healing services. The QNX and INTEGRITY RTOSes have been doing that stuff for years in the field with fewer resources. And the QNX-based Playbook was more reliable, responsive, and fast than the iPad in its first incarnation. Before that, KeyKOS outperformed IBM's monolithic stuff on their mainframes with better security and full persistence of system state.

So, they worked better and keep working better. People just use monolithic kernels for some reason. Just like they took forever to get off C for reliable business applications. "Worse is better."


[flagged]


Downvoted? I thought atheism was A Good Thing with the HN hive mind.


I imagine you got downvoted because this thread is neither about Chesterton nor about atheism. HN culture is generally not very tolerant of threads that deviate into things that aren't at least tangentially related to the main topic-at-hand.


I'm not totally sure what you mean.

I would not expect an atheist to consider themselves "God forsaken", and I would not be surprised if one was kind of offended by being called that.

(I myself am not an atheist, ftr)

Given that, I am not sure why you would be surprised that people reacted negatively?


Because so many on HN are so proud of their atheism. I'm not a conventional believer either. But my description happened to be factual, not an insult.


It can only be factual if you consider god real.


Your phone likely has an L4 microkernel running the radio. There are millions and millions of microkernels out there running all kinds of things, going where Linux fears to tread.


I'm surprised that something so incredibly prevalent is so rarely discussed.


Much of the tech community views this as "settled" thanks to the Linus versus Tannenbaum debate.

Microkernels aren't going away any time soon. :-)


Oh I know - I meant more "running L4 kernels", with the seL4 currently looking quite interesting.

At what point do you conclude that you're:

* running everything in hypervisor X, thus hardware support doesn't matter

* running only servers, thus desktops don't matter

* and thusly, may stand more to gain from an environment that never catered to the above two?


In his defence of them, Tannenbaum mentions a lot of companies using microkernels for high-reliability applications:

https://www.cs.vu.nl/~ast/reliable-os/

For consumer devices, the Blackberry Playbook was one of the better examples, given it was built on the QNX microkernel for reliability, and nobody was complaining about its performance (e.g., running two games at once). Lots of phones have microkernels in them to isolate the baseband from problems in the main OS. Did you know you were using a virtualized OS passing messages to other components? I didn't either until I saw the vendor's press release. They might be efficient after all. ;)

Here's one used a lot in safety-critical and security-critical apps with nice security architecture:

http://www.ghs.com/products/rtos/integrity.html

The beautiful and efficient MorphOS uses a microkernel:

http://www.morphos-team.net/intro

One that was made for capability-security model with FOSS code:

https://web.archive.org/web/20070428020436/http://www.eros-o...

Note: See their papers and KeyKOS especially, as it was first successful one on IBM mainframes of old.

GenodeOS and MINIX 3 are the best efforts to look at in FOSS today as they're actively maintained. Not at Linux feature set or stability yet given almost no staff haha. Already on bare metal and hosting apps/desktops, though. Just don't think stuff like Hurd is representative of microkernels in general. It's just a failed GNU project. ;)


I doubt the engineers at Blackberry would agree with you, given that they chose the QNX microkernel as the basis of their next-generation OS. Also, I never heard any complaints about AmigaOS's or Symbian's microkernel designs being impractical.


AmigaOS never implemented any memory protection, did it? So it hugely favours a microkernel design, since there's no expensive context switch to translate a call into kernel mode; everything is user mode.
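
You can see why from the API. Sending an Exec message is just linking a pointer into the receiver's list in the single shared address space; a rough sketch with the classic exec.library calls (the payload and port wiring are made up/elided):

    /* Why AmigaOS message passing is cheap without memory protection:
     * PutMsg() just links this struct into the receiver's port and
     * signals its task - no copy, no kernel/user address-space switch. */
    #include <exec/types.h>
    #include <exec/ports.h>
    #include <proto/exec.h>

    struct MyMessage {
        struct Message msg;   /* standard Exec message header */
        LONG payload;         /* data travels by pointer, never by copy */
    };

    void send_and_wait(struct MsgPort *target, struct MsgPort *replyPort)
    {
        struct MyMessage m;
        m.msg.mn_Node.ln_Type = NT_MESSAGE;
        m.msg.mn_Length       = sizeof m;
        m.msg.mn_ReplyPort    = replyPort;
        m.payload             = 42;

        PutMsg(target, &m.msg);   /* list insert + signal the receiver */
        WaitPort(replyPort);      /* sleep until the receiver ReplyMsg()s */
        GetMsg(replyPort);        /* pull our (now replied) message back */
    }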


Whilst you are correct in that AmigaOS didn't implement memory protection, it's possible to argue that wasn't by design.

The original design for AmigaOS was known as CAOS. The CAOS design included resource tracking, which I believe is a useful feature when it comes to implementing memory protection.

If I remember correctly, what ended up happening was that the microkernel for CAOS was developed in-house (Exec), but contractors were being paid to develop the rest of the OS, and they disagreed with the design decisions of CAOS and wanted to make it more Unix-like. Once the Amiga team realised the contractors had wasted time developing something different from what they intended, they sought out an alternative solution to replace the user-space side of the OS. As a result, they contracted a different team to rework an OS called TripOS to use the Exec microkernel (that was originally intended for CAOS), which then became the AmigaOS that Amiga users know.

You can read a bit more here:

http://www.amigahistory.plus.com/caos.html


Quark microkernel in MorphOS https://en.wikipedia.org/wiki/Quark_%28kernel%29

MorphOS - quite fast and usable :) http://www.morphos-team.net/intro

So, out with the old and in with the new.


Does MorphOS have resource tracking and memory protection now? It didn't last time I was following it (about 3 years ago). To be fair, none of the next gen Amiga operating systems (OS4, MorphOS, AROS) had implemented that last time I checked.


The second paragraph of the Wikipedia article mentions user/kernel-mode division and kernel-protected objects. Past that I don't know. Probably none for legacy systems and similar to Darwin (coarse-grained) for new ones. Could be more fine-grained, though.


Both Qualcomm and Apple are known to run variants of L4, a microkernel, on their mobile hardware: https://en.wikipedia.org/wiki/L4_microkernel_family#Commerci...


And Samsung uses INTEGRITY Multivisor in KNOX.

http://www.ghs.com/products/rtos/integrity_virtualization.ht...

Multivisor is kind of a combination of the INTEGRITY RTOS and virtualization stuff.

The leader in this kind of stuff was OKL4, as in your reference, though. General Dynamics supplies their stuff now due to an acquisition.


Do you mean for GNU Hurd specifically, or just microkernels in general? Because there are several microkernels in production use if you look beyond desktop OSes.


As others said, QNX is used in a lot of places. The microkernel issues raised in the '90s have been at least partially addressed (I'm just a noob) in the L4 microkernel family. MINIX 3 too, IIUC.


Not all microkernels. There are many newer ones which have much better performance characteristics than their predecessors. Mach, however, has rather slow IPC, which would be a problem for the Hurd.


Mac OS X is based on the Mach microkernel. I would think that counts as real-world use.


XNU (OS X's kernel) doesn't really use Mach as a microkernel. It supports user-mode device drivers, so you might call it a hybrid microkernel, but practically speaking, it's a monolith just like Linux; everything of practical relevance (file system, graphics, most disk and networking I/O) runs in kernel mode. Some parts (high-level USB, Bluetooth, power management, etc.) run as user-space processes, but they talk to kernel-mode device drivers.

(XNU was inherited from NeXT, which was based on Mach 2.5, which was actually not a microkernel at all.)


Why should I be excited?


"Excited" is the wrong word for this sort of development. "Interested" is probably a better word. Right now, I see this sort of thing as a project for its own sake, and that can still be interesting.

More importantly, projects like this frequently spread their ideas elsewhere, even if the project itself isn't very useful. Another possibility is that someone gets a bunch of skills from developing in the much more open territory of the Hurd and uses that knowledge to contribute in a more limited fashion to the Linux kernel.

Of course, I'm sure that Richard Stallman & Friends would be unhappy with my characterization of the Hurd as an academic toy project, but them's the breaks. It's on them to show that it is competitive on any level with Linux, and right now that's not the case.


Richard Stallman thinks work on the Hurd is unimportant and nothing more than an academic toy project. He is interested in the ethics and philosophy of software freedom, and the Linux kernel provides that when combined with other completely free software. RMS has repeatedly said that he sees no real value in continuing the Hurd project in particular and is much more interested in high-priority things like free software replacements for Skype, free firmware, etc.


RMS isn't really involved in the Hurd, in fact he's mostly given up on it. It's being developed by a few enthusiasts, and academic interest in it has been dwindling.

Being competitive with Linux isn't even a project goal.


I just hope it's not infected by systemd.


Fantastic comment, because the kernel is the init process. You should get a medal.



