
Whatever happened to the Hurd? – The story of the GNU OS - vorbote
http://www.linuxuser.co.uk/features/whatever-happened-to-the-hurd-the-story-of-the-gnu-os
======
nasalgoat
An oft-quoted phrase that applies here is "the enemy of good is 'better'".

Clearly RMS wanted his idea of perfection, and almost 30 years on, perfection
remains out of reach while "good enough" rules the world on Linux.

An important lesson to learn.

~~~
slurry
It's easy (and legitimate) to blame the failure of Hurd on poor design and
management choices.

I think it's also instructive, though, to observe how _no_ "better than Unix"
project has really attained any success. Some of them have delivered more
workable code than Hurd, but Plan 9, Inferno, Amoeba - none of them have
really caught on.

~~~
InclinedPlane
Interestingly, Unix derivatives are now some of the most popular operating
systems for modern devices, as iOS, Mac OS, and Android are Unix/Linux-based.

~~~
tonfa
I thought XNU was considered a micro-kernel because of the Mach part.

~~~
loeg
To quote that bastion of Jimmy Wales:

"Although Mach is often mentioned as one of the earliest examples of a
microkernel, not all versions of Mach are microkernels.

The project at Carnegie Mellon ran from 1985 to 1994, ending in apparent
failure with Mach 3.0, which was finally a true microkernel.

Mach and its derivatives are in use in a number of commercial operating
systems, … most notably Mac OS X using the XNU operating system kernel which
incorporates an earlier (non-microkernel) Mach as a major component.

Neither Mac OS X nor FreeBSD maintain the microkernel structure pioneered in
Mach."[0]

[0]: <https://en.wikipedia.org/wiki/Mach_(kernel)>

~~~
chrismaeda
I was one of the last Mach people at CMU. The BSD emulation on Mach 3.0 was
implemented by stubs that made IPCs to the BSD emulation server, which then
made additional Mach API calls to implement the UNIX semantics. This design
was generally half as fast as an in-kernel implementation (e.g. Mach 2.5 or a
commercial UNIX kernel). In the mid-90's we figured out how to make up the
performance gap by refactoring the BSD emulation to put some code in the same
address space as each application (i.e. in the libc .so), which allowed you to
avoid the RPC to the emulation server in most cases (e.g.
<http://dl.acm.org/citation.cfm?id=168639>). So I don't think it's fair to say
that microkernel implementations of UNIX are inherently slow; instead I would
argue that the simpleminded approach to UNIX emulation was the real problem.

None of this technology made it into a mainstream Mach distribution because
the band broke up: Rick Rashid and his staff and students went off to work at
Microsoft Research. Brian Bershad took over the project and then moved to
University of Washington. It was also becoming clear in the mid-90's that the
OS didn't really matter much anymore, since the big money was being made in
Internet applications. So most of the hackers who might have worked on new
operating systems during this time ended up working on application servers and
web apps instead.

------
gus_massa
I think that this article underestimates the ability of Linus Torvalds to
manage the Linux developer community. Leadership, dispute resolution,
technical choices, compatibility versus new features, and even flamewars. It's
very difficult to be a BDFL.

The idea is that if Linux were not available, the open source community would
have been working on Hurd instead. But without Linus, perhaps the community
could not have existed (new members leaving after a few quarrels), or all the
effort would have been lost in complete rewrites.

------
AndrewDucker
Can someone explain to me whether the problem with Hurd was that the basic
idea was impossible, or whether it was the way they went about it?

i.e. was it the relentless restarting of the project that was to blame, or did
they keep restarting because every way they approached it turned out to be
impossible?

~~~
loeg
Microkernels were an immensely popular idea in academia (Mach, Minix, …) when
Hurd was begun; I don't think the fundamental problem with them — performance
is horrible — had been made obvious yet. Microkernel design is probably a good
idea, but the message passing overhead kills actual microkernel
implementations.

I've heard the Windows NT kernel described as being designed like a
microkernel architecture (separate modules with clear APIs), but with direct
function calls instead of message passing. I don't know if the Linux/BSD
kernels are much different — I've poked at all three, and at a high level they
look very similar.

~~~
haberman
> I don't think the fundamental problem with them — performance is horrible —
> had been made obvious yet.

On the contrary, the idea that performance of microkernels is "horrible" is
the current received wisdom, believed by the majority of programmers without
critical examination, and based on very performance-poor early microkernel
designs like Mach.

The truth is that modern microkernel designs like L4 can perform IPC over 20
times faster than Mach. Another important advance for microkernels is the
tagged TLB, which can alleviate the TLB misses that are usually incurred on
every context switch.

Someday, hopefully not too far in the future, someone is going to write a
modern, practical, and free microkernel-based OS that implements all of POSIX
with performance that rivals Linux, but that offers a level of isolation and
modularity that Linux could never offer. When that happens, a lot of people
are going to scratch their heads and wonder why they believed the people who
told them that microkernels are "fundamentally" slow.

I had hoped that HelenOS would become this (<http://www.helenos.org/>), but
its lowest-layer ipc abstraction is async-based, which I think is a mistake.
One of L4's innovations was to use sync IPC, which offers the highest possible
performance and gets the kernel out of the business of queueing. You can
always build async ipc on top of sync without loss of performance; this puts
the queues in user-space which is where they belong (they're easier to account
for this way).

I asked the HelenOS people why they decided to go this way, and this is their
response (one of their points was "the L4's focus on performance is not that
important these days", which I disagree with):
<http://www.mail-archive.com/helenos-devel@lists.modry.cz/msg00338.html>

~~~
jfb
But why POSIX? There's only so much lipstick any one pig can take.

~~~
rogerbinns
Because it gets you an immense amount of existing tools that you won't have to
reimplement. You can get shells, compilers, and numerous utilities (eg cp,
cat, tail, tar, zip, awk). Look at the list of what BusyBox includes to get an
idea of the kind of functionality any system would need to get started.
<http://www.busybox.net/downloads/BusyBox.html>

If you provide terminal emulation then you also get editors. If you implement
ptrace then you get a debugger. If you provide networking then apps can
display remotely (X).

Even if you are developing something unique for your operating system, using
POSIX in that lets you perform some of the development and testing on other
systems that already have working toolchains.

In short having POSIX saves a huge amount of time and effort. That doesn't
preclude you from having other APIs around too. Don't underestimate the
importance of having a functioning system while you replace or augment it with
parts that are your unique value add.

~~~
jfb
Sure, but keep in mind that my ideal would also involve discarding most of
what a POSIX compatibility layer would get you. Why cleanroom a nifty kernel
and then turn it into something that's almost exactly like what already
exists? If you want Unix, you know where to find it, as the wag said.

~~~
rogerbinns
The kernel doesn't have to support POSIX. You can do the emulation in user
space. The only tricky part of POSIX is fork, but chances are you won't need
that for compilers etc, just exec.

In any event, don't confuse the journey (some POSIX compatibility in order to
take advantage of existing toolchains while building your new OS) with the
destination (a clean, elegant, world-changing, nifty OS with a new API).

And if you are going to have command line tools in your OS, they will need an
API and you may as well pick a useful subset of POSIX.

------
vacipr
For those interested, Debian is planning on releasing a Hurd variant just like
they did with kFreeBSD. <http://www.debian.org/ports/hurd/index>

~~~
Avshalom
There's also ArchHurd <http://www.archhurd.org/>

~~~
vacipr
Thank you. I knew there was something else.

------
DanBC
Can anyone tell me how computing would be different today if micro kernels had
taken off? If everyone working on Linux had been working on Gnu Hurd?

~~~
klrr
Then everyone would have run BSD.

(J/K)

Micro kernels are very hard to debug.

~~~
gioele
> Micro kernels are very hard to debug.

Microkernels are not harder to debug than monolithic kernels. I'd even say
that they are easier to debug, much easier. (Personal experience in debugging
both.)

The problem with microkernel-based OSes is, as Linus Torvalds aptly put it,
that they turn well-understood memory-protection problems into not-so-well-
studied IPC problems. (The actual quote is «They push the problem space into
_communication_ , which is actually a much bigger and fundamental problem than
the small problem they are purporting to fix.»)

The microkernel is not the real problem here; the big issue is debugging
faulty IPC sequences between the servers that implement the OS services, a
problem that is almost non-existent in monolithic kernels.

HOWEVER, current monolithic kernels are now facing growth problems because of
two aspects: we want fancy remote storage accessed as easily as local storage
(do you want to mmap a file stored in a RAID setup implemented with SATA-over-
ethernet disks?), and the process model is too leaky, so we need stronger
containers like VMs (which are becoming abstractions as leaky as current
processes). All these new features require communication between components
that were previously thought of, and implemented, as independent. This means
that the IPC problems are now creeping into the world of monolithic kernels.

~~~
kabdib
There were a number of microkernel efforts at Apple in the 80s and 90s.

\- Pink (later known as the money-burning party Taligent) had a 'new kernel'
that was message-passing. They spent a lot of time working on RPC efficiency.

\- The Newton used a message-passing kernel. Not the most efficient thing in
the world, but there was MMU support to do some interesting page sharing /
fault dispatch policy stuff, so you could get IPC-like behavior with faults.
Basically hobbled by a 20 MHz processor with minimal cache, and not very much
RAM at all.

Btw, I didn't notice the Newton being very hard to debug (except that all of
our debugging was printf, or you stared at the disassembled output of CFront).

------
klrr
Note: RMS thinks Linux is "good enough", and Linux-libre is part of GNU, so
Linux will be used for the in-development official GNU "distro".

------
frozenport
I think there is another story: not that many people were working on GNU.

~~~
zanny
The story mentions this. The industry had a working Linux and jumped on board.
It is like how the industry jumped on C++, or x64, JavaScript, or Unicode. The
small imperfections in the form of these tools didn't, at the time of their
inception, justify the much larger immediate workload of completely replacing
them, with no backwards compatibility, with something without the blemishes.
Competitors weren't mature enough to step in, so the imperfect progressions of
tried methods took the reins rather than bolder, newer ideas that meant
shifting some of the inertia of the industry.

I think that might be one of the less appreciated legacies of 80s - 00s
software. We have tools that are now really showing their flaws, but we built
the empire on cracked bricks, and the unstable foundations force us to throw
more man-hours and effort into keeping the whole thing standing than if we had
just started with a fresh foundation when the better alternatives presented
themselves, even if they would have taken some more work.

I still _really_ wonder where we would be if we had a C++ with a clean grammar
and Python level readability. Where we didn't try to layer interpreters and
JITs over convoluted C ABI compatibility.

~~~
chipsy
I recommend the "STEPS Toward Expressive Computing Systems" reports for more
current-day thoughts on OS redesign (available at
<http://www.vpri.org/html/writings.php>).

Basically, by designing exactly the necessary language for each layer, they've
reduced the code requirements to reach useful applications by multiple orders
of magnitude.

I honestly wouldn't be surprised if the results of this project eventually
creep into industry.

~~~
qznc
I would be surprised if the results of this project eventually creep into
industry.

It sounds great to write vector and font rendering in a few hundred or
thousand lines of code. On the other hand, others have been writing orders of
magnitude more code in orders of magnitude more time. I cannot believe that
has been due to a wrong choice of language or approach. I much more believe it
comes from supporting real-world standards and requirements. Loading fonts
from various formats, lots of configuration, supporting more and more of
Unicode, doing all that optionally with hardware support. That is the tedious
part. It is not about how to implement Bresenham's line algorithm most
elegantly.

------
lucian303
“My first choice was to take the BSD 4.4-Lite release and make a kernel. I
knew the code, I knew how to do it. It is now perfectly obvious to me that
this would have succeeded splendidly and the world would be a very different
place today."

So true. It's unfortunate AT&T/Unix System Laboratories kept the BSD kernel
code locked up in a lawsuit. Would have loved the article to have had a deeper
insight into that as Stallman had already chosen to abandon Hurd by the time
Linux came around.

It wasn't just that Linux was available, it was that BSD wasn't. It's too bad
that the last few years have seen a decline in FreeBSD / other BSD OS's
especially as it is an amazing operating system still light years ahead of
GNU/Linux in many areas. Not to mention that it's one unified OS rather than
hundreds of GNU/Linux distros.

One can only imagine what would have happened had the BSD code not been tied
up in lawsuits. I bet they would have gone with the mature BSD kernel, leading
to a better OS, and Linux would probably be a footnote in history if that.

Interesting how inferior technology wins a lot more than it loses. That said,
I still love GNU/Linux. :)

~~~
lvillani
> _Not to mention that it's one unified OS rather than hundreds of GNU/Linux
> distros._

I admittedly haven't used any BSD enough to make a well informed opinion but I
was under the impression that BSDs are fragmented at the OS level (i.e.:
different _kernels_ ), while Linux is fragmented at the _distribution_ level
(i.e.: default collection of software, file-system layout, etc).

I imagine that, in addition to there being different kernel flavors there are
also distribution level differences (e.g.: there are subtle differences
between FreeBSD's rc.conf and NetBSD's), so I'm not sure which approach is
better or worse, but I tend to lean toward the "one kernel, several
distributions" camp.

~~~
lucian303
There is no OS-level fragmentation. Sorry, that makes no sense in the BSD
world. FreeBSD is an OS. OpenBSD is an OS. Etc. That's not fragmentation. They
are different OS's. It's like saying there's fragmentation between Windows and
OS X.

GNU/Linux is one OS with hundreds of distros. That's fragmentation.

------
drivebyacct2
So if things like Arch Hurd exist... are we just missing user interest in
Hurd, or are there usability problems with it?

~~~
mcartyem
It's not enough to build something that's better. You also need to sneak it in
like a Trojan horse into a user's fortress. You can't easily launch a frontal
attack on Linux.

One of the most valuable opportunities with mobile is that current
technologies can become irrelevant. A big success for L4 is that it made it
into mobile phones.

------
derleth
> Linus Torvalds had begun his project to write a UNIX-like kernel for the IBM
> 386.

IBM 386? What in the world is an IBM 386?

~~~
chimeracoder
As someone who's in his 20s and first got interested in programming as a kid
by trying[1] to install Linux on his 386... I don't know how to react to this.

On the other hand, it's ridiculous to think of how far things have come since
then... people who complain about Linux usability and driver issues in 2012
_really_ don't know what they missed!

[1] and failing!

~~~
btgeekboy
I remember having to get out the specifications for my monitor during the
Linux setup process in the late 90s. I don't miss that.

------
sneilan
It sank like a turd.

~~~
muyuu
Turds float.

~~~
dalke
... like a coprolite?

