
GNU HURD: Altered visions and lost promise - bensummers
http://www.h-online.com/open/features/GNU-HURD-Altered-visions-and-lost-promise-1030942.html
======
avar
Hurd's current status should be considered in the context of how the GNU
project is run. In the early days GNU had some paid developers, but now their
projects are run purely on a volunteer basis. Or, in the case of projects like
GCC, by developers who are paid by companies to work on GNU projects.

Hurd is late because there isn't any need for it anymore; we now have plenty
of free kernels. And GNU isn't sponsoring it because of that, just like it
isn't sponsoring a GNU MTA.

But work continues because a few people are still interested. There are a lot
of neat ideas in Hurd. You get a lot of nice things once you move most of the
kernel to user-space.

The best example of this is custom user-mounted filesystems (translators). You
can, e.g., mount a remote FTP site in ~/ftp on Hurd without being a superuser;
GNOME and KDE implement this on their own because users can't do it on other
systems without special permissions.
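As a rough illustration of the translator idea, here is a toy sketch in Python. All the names (Translator, attach, resolve) are invented for illustration; the real Hurd uses `settrans` and Mach ports. The point is that ordinary, unprivileged user code attaches a filesystem server at a path, and path lookups are dispatched to it entirely in userspace:

```python
# Toy sketch of Hurd-style translators: any user attaches a userspace
# "filesystem server" at a mount point, and lookups under that point are
# delegated to it. No superuser involved anywhere.

class Translator:
    """A userspace filesystem server for one mount point."""
    def __init__(self, files):
        self.files = files           # path suffix -> file contents

    def read(self, path):
        return self.files[path]

MOUNTS = {}                          # mount point -> Translator

def attach(mount_point, translator):
    MOUNTS[mount_point] = translator # any user may do this

def resolve(path):
    # Find the longest mount point prefixing the path and delegate
    # the rest of the lookup to its translator.
    for mp in sorted(MOUNTS, key=len, reverse=True):
        if path.startswith(mp + "/"):
            return MOUNTS[mp].read(path[len(mp) + 1:])
    raise FileNotFoundError(path)

# An "ftpfs"-like translator attached under the user's home directory:
attach("/home/me/ftp", Translator({"README": b"hello from ftp"}))
print(resolve("/home/me/ftp/README"))  # b'hello from ftp'
```

On the real system, the equivalent is roughly `settrans -a ~/ftp /hurd/ftpfs <host>`, with the kernel forwarding filesystem RPCs to the translator process instead of this in-process dictionary.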

Linux has been accumulating a lot of microkernel-like features over the years,
e.g. dynamic loading of device drivers, FUSE, etc. In time it'll probably
absorb every Hurd feature worth having.

~~~
rbanffy
Richard Gabriel was right. Unix is good enough and few people care whether
HURD has some nicer features that can be implemented in the wrong layers
through piles of clever hacks.

~~~
avar
Indeed. Worse is better, and I use Linux and not Hurd because it does what I
need even if it doesn't look shiny on paper.

I'm not convinced that Hurd's translator idea is a hack, or that it's doing it
in the wrong layer.

That every program that presents FTP on my system (GNOME, KDE, lftp,
Emacs/TRAMP) has to implement its own VFS is a hack; so is having to use
iptables or another forwarder if I want to run a non-root daemon on a port
below 1024. And so is having to recompile and reboot to enable some minor
feature in the kernel.

EDIT: The last two paragraphs are redundant because I misread rbanffy's reply,
and I oddly can't reply to his new one so I'm putting this here.

~~~
eru
I am quite involved with virtualization. The interesting thing is that some
people are now running their software on virtualized bare metal, i.e.
using the hypervisor but without an operating system.

This is essentially treating the hypervisor as an exokernel operating system.
And it seems to be becoming more common.

Perhaps we are really coming to exokernels in a very roundabout way.

~~~
rortian
But not at all as a general computing environment. It makes sense once you
have your application finished that you could move it to a kernel that has
just what you need. However, it makes no sense when you start development to
try to guess what calls you might end up using.

~~~
eru
You can use it as a general computing environment.

> However, it makes no sense when you start development to try to guess what
> calls you might end up using.

I agree. That's why you will be using libraries to duplicate what your OS did
for you. The Exokernel guys call this a library OS. Libraries are already
quite good at providing abstractions.

The difference from a conventional architecture is that the kernel only
provides secure multiplexing between applications on the same machine, while
the abstractions over the hardware are provided by libraries. It's easier to
experiment with libraries than with operating systems.
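The split described above can be sketched in a few lines of Python. This is a toy model with invented names (Exokernel, LibFS), not the MIT implementation: the "kernel" only hands out and protects raw blocks, and a library OS layered on top supplies the file abstraction.

```python
# Toy exokernel split: the kernel multiplexes raw disk blocks securely
# between applications; a "library OS" builds files on top of them.

class Exokernel:
    """Securely multiplexes raw blocks; knows nothing about files."""
    def __init__(self, nblocks):
        self.blocks = [b""] * nblocks
        self.owner = [None] * nblocks

    def alloc(self, app):
        for i, o in enumerate(self.owner):
            if o is None:
                self.owner[i] = app
                return i
        raise MemoryError("out of blocks")

    def write(self, app, i, data):
        if self.owner[i] != app:     # protection, not abstraction
            raise PermissionError(app)
        self.blocks[i] = data

    def read(self, app, i):
        if self.owner[i] != app:
            raise PermissionError(app)
        return self.blocks[i]

class LibFS:
    """Library OS: a file abstraction built on raw blocks."""
    def __init__(self, kernel, app):
        self.k, self.app, self.files = kernel, app, {}

    def create(self, name, data):
        i = self.k.alloc(self.app)   # ask kernel for a raw block
        self.k.write(self.app, i, data)
        self.files[name] = i         # naming lives in the library

    def open(self, name):
        return self.k.read(self.app, self.files[name])

k = Exokernel(8)
fs = LibFS(k, app="editor")
fs.create("notes.txt", b"library OSes provide the abstractions")
print(fs.open("notes.txt"))
```

Swapping in a different LibFS (say, a log-structured one) requires no kernel change, which is exactly the experimentation advantage claimed above; a hypervisor plays the Exokernel role for whole guest images.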

The MIT-Exokernel-Operating-System page (<http://pdos.csail.mit.edu/exo.html>)
explains this in more detail. If you look at the first picture, and replace
`Exokernel' with `hypervisor', you get modern virtualisation.

~~~
rortian
I'm semi-intrigued, but I'd like to see a sign of life beyond a page from '98.
Not trying to be cruel, just saying that I'd like to see what's going on now.

~~~
eru
The signs of life are the current developments in the virtualization world.
The original MIT Exokernel effort may be dead, as far as I know.

------
lvecsey
I don't think the article mentioned it, but there was a point, well after the
Linux kernel reached prominence, when it was understood that the HURD was just
a back-burner project. GNU has still had a huge impact on everything from IT
data centers to Apple, and arguably you wouldn't even have initiatives like
Wikipedia without it. It's like the foundation that allowed everyone else to
place the cherry on top.

~~~
tzs
Certainly they had a huge impact, but if GNU had not been around, I wonder if
something else would have filled the same niches and subsequently had similar
impact?

For instance, the Berkeley people were also working on free stuff around the
same time (indeed, the article points out the BSD was a contender for the GNU
kernel).

I'm wondering if GNU is to free software what, say, Alexander Graham Bell was
to the telephone.

~~~
IgorPartola
There may have been someone else to do all the work that RMS and GNU did, but
the reality is that while everyone else was too busy or too fatalistic about
the situation in the late 1980s, only RMS et al. stood up and took on the
responsibility for everyone else.

Also, note that Alexander Graham Bell invented the first practical telephone.
RMS is an advocate. Much of his software work is specifically not original: he
was trying to emulate UNIX. He and GNU are more like founding fathers of the
US than inventors of a device. Yes, eventually someone else might have
declared independence in the colonies and gone to war with Great Britain, but
we acknowledge that it was GW and company.

~~~
IdeaHamster
I'm fairly sure the parent's reference to Bell is with respect to details that
have recently come to light that Bell may have "stolen" all or most of the
technology behind the first phone. That is, Bell's importance has less to do
with his ability to create the technology and more to do with packaging and
popularizing it.

~~~
IgorPartola
I was not aware of that. Nonetheless, it seemed to me that the parent post
implied that if RMS and GNU didn't come to be, then someone else would have
come up and done the same thing, and thus what they did is somewhat less of an
achievement. My post was mostly in disagreement with that idea.

------
IgorPartola
This story is full of irony. My favorite open source moment:
<http://www.youtube.com/watch?v=bDxMJQLXmBE>

~~~
bl4k
closely followed by the #1 hit: <http://www.youtube.com/watch?v=_jMH_WRwOWs>

~~~
blasdel
"like giving the Han Solo award to the Rebel Fleet" -- he was a terrific
speaker a decade ago, even if he did go off the rails quickly raving about
GNU-slash-Linux.

Now he just eats detritus from his foot:
<http://www.youtube.com/watch?v=I25UeVXrEHQ>

------
dasht
I was one of the hacker employees of the FSF back in the early days of the
HURD project. The pay was modest by industry standards but fair (every
employee, hacker or not, got the same pay -- later the formation of a union
was encouraged).

Some very good and lasting work got done back then, in spite of our rather
unconventional work habits. I'm thinking especially of all the work done
laying down the foundation of GNU libc. Roland McGrath got a lot of code
rolling and, perhaps more importantly, established a pretty good standard of
coding conventions and quality expectations.

I did not myself work directly on the HURD but in our small office I did have
chats with McGrath and Bushnell about it. The sentiment around the design was,
I think it fair to say, somewhat giddy. The free software movement was (and
is) all about freeing users from subjugation to those who provide software.
The HURD's microkernel architecture and the structure of the daemons would
securely free users from subjugation to system administrators - each user
could securely invoke a set of daemons to create the operating environment he
or she wished, no special permissions required.

It was well understood back then, and even a point of discussion in academia,
that a microkernel architecture posed some difficult problems for performance
(related mostly to a greater number of context switches as messages pass
between daemons rather than syscalls being handled by a monolithic kernel).
Rashid's work had suggested that this problem was not so terribly significant
after all. And so, at least to me, it felt like the GNU project was not only
doing this shoestring-budget freedom-fighting hacking, but also leading near
the bleeding edge of CS research made practical. Well, that was the theory,
anyway, and we were mighty proud of ourselves and generally excited to be
there.

Not much, but some, of the hacking of the core staff took place "from home".
You must remember that this was before any kind of data-over-voice or
particularly high-bandwidth connection was commonplace - so that hacking was
over a modem connected to a text terminal. Mostly we hacked in a shared office
which, if you saw it, you'd think "Wow, that's a slightly large closet." We
were, at that time, guests of MIT.

With all due respect for RMS, and I don't think he'd especially disagree with
this (though I could be wrong): he was an absolutely terrible project leader
for the hacking part. As history has shown, his popularity among some
notwithstanding, he's extraordinarily good at the political part of his work.
Leading the technical project? Not so much.

It wasn't so much that he dictated bad technical choices. Even the choice to
use Mach might have worked out. On the contrary, he was relatively "hands off"
in most technical matters, only micromanaging if you really dragged his
attention to some detail. It was more that he lacked any coherent overall
strategy for completing GNU and his broad directives involved underestimations
of the amount of work involved and were sometimes scattered, even bordering on
inconsistent. It just wasn't his strength.

The original vision for the GNU system, at least as I understood it, was to -
sure - grow a unix clone, but then to build a user space that much more
closely resembled that of lisp machines. Emacs (with its lisp extensibility)
was taken to be a paradigm for how interactive programs might work.
Originally, it was even envisioned that the window system would be lisp based.

One early change to the original GNU vision occurred when it became clear that
X11 worked pretty well and was here to stay and would be free software. As a
practical matter: just use that.

Later, as mentioned in other comments here, the EGCS fork of GCC caused issues
- ultimately leading to the displacement of an FSF-appointed project leader.
There is some back story to that. The company Cygnus (later acquired by Red
Hat, founded by M. Tiemann et al.) had been advertising to customers that not
only could they develop customized extensions to GCC, but that they could
shepherd those extensions into the "official releases". There was frustration
at Cygnus and some other firms that the FSF branch was not merging these
changes quickly enough or was arguably being too prickly about the nature of
the changes. As nearly as I can tell those sentiments led to the EGCS fork and
RMS was ultimately put in the position of having to choose between "blessing"
that fork or simply losing any claim at all to the future of GCC.

Around this time, I am told but cannot myself verify, RMS was also under
pressure from some key FSF advisors or supporters to exit the software
development business and focus on the politics. Whatever the motivation, the
FSF shed most of its in-house development efforts.

The pattern of losing the original GNU vision continued in the controversy
over Gnome vs. KDE. Originally, KDE had licensing issues and did not pass
muster with the FSF as being free software. Those problems have since been
fixed but at the time it led to RMS' proclamation that Gnome would be the
desktop for GNU -- a radical departure from what was originally conceived.
Later, as you may have read, RMS came to describe Miguel as a traitor to the
free software movement.

Somewhere in there - I'd have to look things up to get the timelines exactly
right - Debian took off, in part to try to fill a void in the FSF's leadership
at assembling a complete GNU system. Bruce Perens penned the now famous
"Debian Free Software Guidelines".

A small group of relatively wealthy influencers, including Tiemann, met with
Eric Raymond and conjured up the allegedly business-friendly "open source"
notion. The main differentiation they sought from the FSF was that they would
not condemn proprietary software or describe themselves as a freedom movement
- they sought to emphasize the economic advantages of having volunteers do
work for no pay. In my view, their main purpose upon founding was to attempt
to politically marginalize RMS (a project in which they've had some success).

Bushnell moved on to a different stage of his life and, I guess it's fair to
say, a higher calling. McGrath moved on to what I gather is a sweet job for
Red Hat. The GNU project was gutted. Its institutional memory and such
momentum as it may have had was gone. This was in part because RMS was not so
great as a project leader but also, in large part, because the project was
under significant attack.

In my humble opinion, there would be plenty good to come of a resurrection of
the GNU project. I don't necessarily mean a resurrection of the HURD although
I suspect we can do better than the Linux kernel. I do mean a return to a
concentrated effort to build the kind of user space originally envisioned.
While such a project could have enormous social benefit, I don't see any way
to institute it and find support enough to carry it out.

~~~
dkarl
Programmable applications is indeed the way things should go. And the
programmability has to be immediate. Compare the slippery slope of customizing
emacs to the burden of writing an Eclipse plugin. Most of the things that go
by the name "plugin architecture" have a large barrier to entry because of the
complex, heavyweight relationship of a plugin to its host environment.

~~~
stcredzero
_Programmable applications is indeed the way things should go. And the
programmability has to be immediate._

Then you should look at Smalltalk. Just deploy with the compiler and dev tools
in the image. You can quickly visually inspect every object in the image and
write a script against it, and run it instantly.

------
hga
I'm a little leery of history like this that ignores the impact of the AT&T
lawsuit: <http://en.wikipedia.org/wiki/BSD#Net.2F2_and_legal_troubles>

E.g.:

" _Linus Torvalds has said that if 386BSD had been available at the time, he
probably would not have created Linux._ "

~~~
dfox
4.4BSD-Lite, mentioned in the article as a possible base for GNU, is an
outcome of this lawsuit - BSD without a few legally unclear parts, i.e. "not a
complete kernel, in contrast to Mach".

------
moron4hire
This is a story with a theme, a theme of stubborn refusal to admit failure in
the face of the huge successes of others. Linux showed that open source _can_
develop a vibrant operating system project. The BSDs have shown that a tightly
controlled engineering effort _can_ be achieved within an open source context.
And Apple has shown that you _can even_ make an operating system out of the
Mach lineage! When faced with repeated failure, one has to look at the common
denominators. It's GNU.

~~~
kiba
Emacs doesn't seem to be much of a failure. GCC is used by everyone. However,
admittedly, both were originally written by RMS himself.

~~~
hga
The "canon" official FSF GCC effort failed, at least in that it was replaced
by a bazaar model fork:
[http://en.wikipedia.org/wiki/GNU_Compiler_Collection#EGCS_fo...](http://en.wikipedia.org/wiki/GNU_Compiler_Collection#EGCS_fork)

As noted by tl, EMACS suffered a nasty fork, and it bears pointing out that
RMS started with a fully functional version of EMACS written by none other
than James Gosling of Java fame. RMS replaced Gosling's MockLisp with a real
Lisp (both bytecoded), but the guts remained largely the same.

~~~
eru
I wouldn't consider Emacs Lisp a real Lisp any longer, at least by modern
standards. But anyway, I agree.

~~~
hga
Is not the big difference dynamic vs. static scoping?

That was a matter of much debate that started to conclude when RMS did GNU
Emacs, in 1984 when Common Lisp was also officially released for the first
time, the latter's biggest breaking change perhaps being static scoping.

Were you thinking of anything else?

~~~
eru
I thought only of the dynamic scoping. I just found out that Emacs Lisp
doesn't have tail recursion optimization, thus dooming you to use side-effects
(because recursion can't be used for loops).
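The "doomed to side-effects" point can be demonstrated concretely. Python is used here purely for illustration, since it shares Emacs Lisp's lack of tail-call optimization: a tail-recursive loop keeps a stack frame per iteration and blows up, so you're pushed toward an iterative, mutating style.

```python
# No tail-call optimization: even a proper tail call keeps its frame,
# so deep recursion exhausts the stack and loops must be written
# iteratively with side effects.

def count_recursive(n, acc=0):
    # This is a tail call, but the frame is retained anyway.
    return acc if n == 0 else count_recursive(n - 1, acc + 1)

def count_iterative(n):
    acc = 0
    while n > 0:             # the side-effecting loop you fall back to
        n, acc = n - 1, acc + 1
    return acc

print(count_iterative(100000))   # 100000
try:
    count_recursive(100000)      # far past the default ~1000-frame limit
except RecursionError:
    print("recursion limit hit")
```

A Scheme, by contrast, guarantees that `count_recursive` runs in constant stack space, which is why recursion there is a perfectly good looping construct.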

But I also understand that Lisp isn't a family of functional languages.
Functional is just a style that's possible and encouraged in Lisp, not
required.

~~~
hga
Well ... Scheme is the only big Lisp that _guarantees_ TCO, while Clojure
provides a recur special form plus a syntax for mutual recursion, since the
JVM does not yet directly support TCO.

Obviously some (many?) implementations of Common Lisp provide it, but I'm not
sure it's something you can particularly depend upon (I got out of the Common
Lisp community in ... 1984 and haven't seriously used it since then).

Then again, I myself wouldn't necessarily describe Common Lisp as a modern
Lisp anymore; it was essentially frozen in amber when it was standardized and
it's filled with legacy cruft, perhaps most especially its being a Lisp-2.
Scheme's standardization process has become glacial, but there is one (well,
two, RnRS and the SRFIs). Clojure is much like Lisp Machine Lisp in the early
days, although it should be past the worst of breaking changes by now.

------
benasselstine
"While such a project could have enormous social benefit, I don't see any way
to institute it and find support enough to carry it out."

I'd support that project. And I suspect that many other people would too.

------
logic
MVP.

~~~
steveklabnik
This is one of the reasons my kernel project is an exokernel.

