A Debian GNU/Hurd snapshot (lwn.net)
69 points by sciurus 1395 days ago | hide | past | web | 44 comments | favorite



Hurd is one of those pieces of tech that just makes me sad. Performance implications aside, micro-kernels feel so much more "right". It feels like proper software engineering techniques applied at the kernel level. How cool would it be to be able to, e.g., replace the default scheduler with something that satisfies your niche requirements?

But in Hurd's quarter century of life, it's gone nowhere, while Linux, Windows and OS X have zipped by. It feels like nothing short of an apocalypse will bring the micro-kernel back. I think the only major one in the wild is QNX (BlackBerry OS), and that's waning. OS X / NT are no longer microkernels.

"Worse is better" still rings true, in more ways than one.


I wouldn't say QNX is waning... It's used in trains, nuclear power stations, cars, embedded systems etc. If anything, QNX is a good counter-example to your argument because it's one of the few really successful microkernels out there.

edit: a broader list of where QNX is used: http://www.qnx.com/company/30ways/


Interesting. I'm so deeply embedded in the web/mobile world that I forgot there are some very real RTOS requirements that drive QNX uptake to this day.


Remember that microkernels aren't just trying to achieve modularity - they also take a stab at robustness by failure recovery. This is IMO doomed to fail - you are much better off trying to achieve robustness by correctness. Monolithic kernels focus more on correctness and therefore progress much faster.


Most of the kernels that try very hard to achieve correctness (in the embedded / real-time space, that would be ones used in military or aerospace applications) are microkernels. Microkernels have a smaller number of primitives that are more orthogonal, so they are easier to implement correctly and to audit, and you get some amount of preemptibility for free.

Also, as far as I know, the only kernel that has been proven correct with a mechanically verifiable proof is an L4 variant.


> proven correct with a mechanically verifiable proof is an L4 variant

seL4, which is a microkernel, of course

http://www.sigops.org/sosp/sosp09/papers/klein-sosp09.pdf


MINIX seems to be working pretty well these days.


Monolithic kernels don't "focus on correctness". Some people might when writing one, Plan 9 for example. But plenty don't, Linux being the most obvious example.


Reading the Plan 9 kernel code hasn't convinced me that its authors focused on correctness. The kernel code is pretty, and its twenty-year-old architecture could pass as only ten years old; but even excluding drivers, there is no shortage of bugs.


You found lots of bugs reading the plan 9 kernel source, and forgot to report them?


Interestingly, important parts of the Linux kernel are proven correct (e.g. the scheduler). But yeah...not the whole thing.


What? When and where has any part of linux been proven correct?


What I mean is that they try to achieve stability by eliminating bugs, not by doing damage control. Proactive vs reactive.


And what I mean is that that is simply not true. Monolithic kernels give us unstable systems like Windows, Linux, the BSDs, etc. Virtually every highly reliable operating system is built on a microkernel.


Exactly - and I don't think OS X was ever truly a microkernel, it used Mach as the base for XNU, but it was always a hybrid kernel even going back as far as the NeXTStep days.

QNX is all that is left and as you say, it's waning -- fast. And ironically, QNX in BlackBerry OS has lost many of the so-called advantages it had in the past, because a large portion of the OS is dedicated to running Android in an emulation layer, thus using a monolithic kernel anyway. sigh

Stallman failed and that's OK. Giving the world Emacs and GCC was enough, frankly. He used to always make it seem like Linus just got "lucky" and happened to make a kernel before Hurd was done -- but that wasn't the case at all. Linus got a working kernel out faster, and Linux proved to be better than anything Hurd could achieve, full stop.


> Linus just got "lucky" and happened to make a kernel before Hurd was done -- but that wasn't the case at all.

The funny thing is it's not even the case that there's only room for just one production-quality open source kernel. Through all this time the *BSDs have done rather well in comparison. We might just conclude that despite a great body of work that we all respect and owe a lot to, maybe GNU/FSF isn't all that great at managing a project. (See also: Emacs schism, GCC vs. EGCS, GCC vs. LLVM.)

(Aside: Personally, I booted Debian GNU/HURD on a Pentium II more than ten years ago to try it out. At the time it was slow and had a bunch of drivers ported from Linux 2.0. It sounds like not much has changed; however, I never did get X working, so good job there.)


> Exactly - and I don't think OS X was ever truly a microkernel, it used Mach as the base for XNU, but it was always a hybrid kernel even going back as far as the NeXTStep days.

Well, it uses it now. (But yes, not as a microkernel.)

You never know how something will come in handy — learnt Hurd programming => now not afraid of Mac OS X kernel-ish stuff


Components were very successful with Visual Basic, but (despite the C# description[1]) haven't really been front and center in languages and operating systems. I honestly think it is because we don't have something at the bottom that was built components-first. Decoupling designs is hard, so we get systems of objects, not components.

1) 2nd paragraph of introduction http://www.microsoft.com/en-us/download/confirmation.aspx?id...


As time goes on GNU HURD becomes sadder and sadder :/

They can't even seem to settle consistently on basics like which microkernel they are using

https://en.wikipedia.org/wiki/Hurd#Other_microkernels

In the 2000s they experimented with three other microkernels, all of which went nowhere. Over half a decade spent on work with their limited dev resources that ultimately went nowhere, since they are still apparently using Mach.

and now (from the article):

- GNU/Hurd 2013 is available for the 32-bit x86 architecture only

- Hurd can currently only make use of one CPU (or CPU core). That is to say, it will still run on multi-core and SMP machines, but only utilizing a single processor.

- Hurd generally uses device drivers ported from Linux 2.0 ... So the latest and greatest shiny hardware might cause trouble.

Once upon a time, if you could get it to run on your hardware, it was at least vaguely competitive, but now the thought of putting it on my 64-bit quad-core laptop... well... what a waste.

And instead of trying to even keep up, they are still trying to pick the right micro kernel...

"In years past, Richard Stallman was quoted as saying that Linux was a good option to use while the Hurd was still incomplete"


The Hurd authors might find your part-time projects just as sad and pathetic as you find theirs. There's not much I can say beyond that, since you've based your judgement on vague and wishy-washy concepts like going somewhere, being "vaguely competitive", and keeping up. It's no business of yours to criticize the direction of a project you have no involvement or stake in, especially when you can't articulate what direction you'd prefer.


The problem is HURD isn't positioned as a part-time project; it's positioned as a replacement for Linux, which is supposedly technically inferior and just has more hands on deck right now. I'm not so scornful of the many, many other OS projects out there because they don't generally act like they are better than everything that exists and destined to supplant it if they can just find the right microkernel... Also, very few "side projects" have been going on for like 25 years... So again, my personal side projects are hard to compare to an organization's quarter-century effort.

Direction I'd prefer? Working on modern hardware and taking full advantage of it! Was that not clear? That's the whole point. They spend all their time on designing and redesigning architecture, and their software is now getting less relevant on real hardware. It mostly only makes sense now to run it as a VM. Which is probably not what they want for the OS that is supposed to replace Linux...


My sad and pathetic part-time projects don't drag on for 25 years. :)


I wonder what could be done to actually turn the project around. There are articles on HN almost every day about small project management this and agile that. What might actually help this project?


The first thing they would need is a compelling reason for people to spend their time on the project.


"In years past, Richard Stallman was quoted as saying that Linux was a good option to use while the Hurd was still incomplete"

Linux will remain a good option for a long time.


A lot of people will point and laugh at HURD, calling it a really sad project or whatever. Why? Considering the small number of developers, it has come along spectacularly. Over time, HURD has been improving at a decently consistent rate. I remember a time when it was nearly impossible to get it running even on QEMU. God damn, people, sure it's missing features, but those features don't prevent it from running. I, for one, am glad that they decided to get it stable before they get it fast.

I will always wait for the day I can say my computer runs GNU. Nothing else, just GNU.


> I will always wait for the day I can say my computer runs GNU. Nothing else, just GNU.

Well, um... why?


Are you joking?

Linux is named after a guy. Why? Because it started out as a single-person project. So did many other major open source projects. The reason so few people still work on HURD is that it's not seen as very promising or interesting or practical.


Oh right, and Linus Torvalds wrote all the millions of lines of code that comprise the Linux kernel. Sure it started out as a single person project, but it certainly didn't stay that way. One of the reasons development on Linux progresses so quickly is because of the vast number of developers.

HURD, to contrast, had a small base initially and has remained that way.


Linus is a good, pragmatic designer and a fantastic manager. Do give him some credit for fostering the kernel community.


I did say "one of the reasons", not "the only reason".


Are the last 2 sentences of my post not visible on your monitor?


Nope. Sorry.


Why? Because in the time HURD has sat around trying to figure out whose code to use and call HURD, other people have actually written useful kernels. Minix for example.


Minix was first released in 1987, three years before development on HURD even began.

Yeah, okay, that was just nitpicking. But they're not doing that anymore, so why bother complaining about it anymore? Sure, it's not up to snuff yet, but perhaps it will be decently soon. Development on HURD has really surged in the past year or two, I'd like to say.


The operating system as a whole is GNU and that project started in 1983. Hurd is just the kernel.

It took them 7 years to even name the kernel that hadn't been written yet. I feel like that says something.

And then when an actual GNU-licensed kernel came along, they ignored it and kept plugging away on vapour.


>Minix was first released in 87, three years before development on HURD even began.

Minix wasn't an attempt at a production kernel in 1987. Minix 1 and Minix 2 were purely for teaching. Minix 3 was a completely new project, and wasn't started until 2005.

>but perhaps it will be decently soon

The history of the project and its current state suggest that is unlikely.

>Development on HURD has really surged in the past year or two, I'd like to say.

You might very well like to say that, but that doesn't mean it is true. There are no competent systems programmers working for/with GNU; it is highly unlikely that a random assortment of people, chosen for their politics rather than their skillset, will be able to turn the longest-festering mess of code ever into a production-ready kernel.


I tried it a long time ago. And there's something magical about the fact that you can have one system (GNU/Linux) and no binaries for the other (GNU/Hurd), only sources for Mach, Hurd, GCC, glibc, and can piece by piece (cross-)compile everything and get a working system (with GCC! you compile it with a cross-gcc, which you compile with your native GCC).

And yes, /dev/null is just a translator (think of it as a mount point): a user-space program discarding stdin. Oh, and / is just a translator giving access to /dev/hdX.
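For anyone curious what that looks like in practice, here's a rough sketch using the standard Hurd tools `settrans` and `showtrans` and the stock `/hurd/null` translator (this is Hurd-specific configuration, so it only works on a GNU/Hurd box; the node name /tmp/mynull is just an example):

```shell
# Record the null translator on a node we create ourselves (-c),
# so /tmp/mynull behaves like /dev/null:
settrans -c /tmp/mynull /hurd/null

# Writes are discarded by the user-space translator program:
echo "discard me" > /tmp/mynull

# Print the translator recorded on the node:
showtrans /tmp/mynull

# Clear the translator setting when done:
settrans /tmp/mynull
```

The point being: there's no magic device node baked into the kernel; "being /dev/null" is just a property attached to a filesystem node, served by an ordinary process.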


Wait, these are all separate processes?

What happens if you crash / ?


The service restarts. What happens if you crash the Linux kernel?

I'm a happy monolithic kernel user, but there are some great ideas in the microkernel model. It just never quite seems to be the best practical choice, except in some niches.


I was not criticizing, I was genuinely curious.

Okay then, I downloaded Debian Hurd and made a VirtualBox image out of it. This should be fun.


Yep, user space. It will be restarted.

(Note that I may be wrong about /; it's somewhat special and I don't remember the details.)

Edit: oh, and when it crashes it will get handled by the crash server (you guessed it, a user-space program), which can kill/suspend/coredump it. You can even choose the behavior via an environment variable!


What happens if you crash the crash server?


Dingoes eat your baby.



