GNU Hurd 0.5, GNU Mach 1.4, GNU MIG 1.4 released (gnu.org)
155 points by Tsiolkovsky on Sept 27, 2013 | 43 comments

OK, I'll nerd out here for a second.

I knew some of the QNX guys,

work:~ call qnx Dan Hildebrandt (QNX) 613-591-0931 x204 (RIP 1998)

That's me using my phone database. Dan was one of the few people who were allowed to touch the QNX core. It was tiny, it knew that the "micro" in microkernel meant small. The entire kernel fit in a 4K instruction cache. The whole thing. Hello people! Micro means small.

For the people out there that understand operating system design, these guys got it. And they managed to make a distributed kernel that worked. I worked on an 80286 (yup) with ~10 users logged in and working.

I'll give more details if anyone cares.

All that said, please allow me to vent about Mach. I know nothing about hurd.

Mach was a CMU project and it was what happens when you let a bunch of people who know nothing about the real world whack on an OS (and a VM system). It was happening at the same time I was getting a decent education from Uwisc. So it took me a while to catch up.

I caught up after having worked on a Unix port. I learned a lot, as would any of you in the same situation. Practice != theory.

I went back to grad school to get a PhD and ran into a class where they were pushing Mach and I listened for 15 minutes and then I just couldn't take it any more and stood up and said "it doesn't work like that". The prof and I went back and forth for a few minutes and then I took over the class and taught it.

Blah, blah, blah, I haven't really made the case against Mach. I have a wife and kids waiting on me; I can come back and try to do better. My view of Mach is that it added nothing to the body of OS work. BSD did - they added networking. Sun did - they added VM. QNX did - they added a microkernel. Mach - they added marketing, in my opinion, and nothing else.

Going to bed but I hope some of the other nerdy OS guys chime in. I suspect they feel the same.

What would be interesting is not a post about how GNU is going to do what has already been done but instead how they are going to do something that hasn't been done.

In one dimension, free vs paid, GNU has won and won big. That's not quite right. Open source has won and won big. Not fair to let the FSF get all the credit (believe me I can go on and on about that).

But in another dimension, innovation, building something the world has not yet seen? Hmm. I'd be very impressed to see that announcement from the FSF. That would be awesome.

Fully agree on QNX: very elegant OS, unique concepts like "syscalls" via synchronous (!) messaging, real-time from the bottom up. We used it for a robot we built in the 90s. Everything from the low-level control to the image processing ran on some fine 386 boards with QNX-native networked IPC. Too bad that BlackBerry was probably the death knell for it.

Also agree on Mach, although somehow NeXT/Apple seem to have managed to make a very good modern OS out of it - with a lot of Mach cruft hidden in the basement. The original NeXTStep kernel suffered seriously from memory leaks and instability. Unbelievable that it became the basis of iOS much later.

The thing that saddens me about QNX is that it had a real good chance of adoption by nerds at the moment when they opened it up. Neutrino was free (not open source) at some point. Business reasons dictated otherwise, I guess.

Back to micro: if you've ever looked at L4, I'd love to read your opinion of it. That project seems to have splintered off in several directions, and even though it's apparently successfully used for embedded stuff, all the open research seems to have stalled (edit: apparently not stalled, but active in very specific niches).

QNX was beautiful. I worked with it for very little time, but I loved it. Elegant and well thought out. I also share your feelings about Mach.

Please, do continue when you have the time. Cheers!

Just a bit before I hustle out of here to coach hockey all day (don't have kids unless you want to do the same :)

So Mach's VM was being done around the same time as the SunOS 4.0 release, which had Sun's VM system (the one that implemented mmap() and friends for the first time) as well as Sun's vnode virtualization of the file system. As a side effect, almost all of the buffer cache vs page cache mess went away (for those who don't know, in the distant past pages were one thing and file system buffers were another. Ponder that for a minute and tell me what's wrong with it. Hint: bcopy()).

I was reading all those papers from Joe Moran and Rusty Sandberg and Steve Kleiman; I really really wanted to work at Sun, to me it was the Bell Labs of the day, so I was slurping in everything I could find to read and think about.

I read the Mach papers and later had a chance to wander through the source code. The best way I can say it quickly is that with some things you don't get it and you wander around for a while and eventually the fog clears and you see the architecture. Sun's stuff was like that, when it snapped into focus it was a thing of beauty. The Mach stuff never snapped into focus, for me. Maybe someone else can see how awesome it is but all I saw was a mess. Yeah, it worked, but so do a lot of messes. The good stuff is architected in a way that leads you to respect and maintain the architecture.

Gotta run; here's betting Bryan C will find this thread and add to it :)

Thanks! Enjoy the hockey games!

QNX is really beautiful and powerful at the same time.

It's the equivalent of a haiku poem in OS code and design.

I wish it was truly open source. Maybe now that BlackBerry... (we don't know for sure what will happen to it) they could share it with us all, so this nice engineering effort could have a great future?

Anyway, just one of the things on my wish list :)

If you have time, please continue.

I learned C on a PC/XT running QNX. It was that small.

I'm also very interested in what you have to say but I wonder, have you checked on the Mach kernel today? And what is your opinion on GNU Hurd with Mach?

My personal opinion is completely uneducated, merely one of quaint amusement that the GNU kernel is still going strong. Perhaps this is a prelude to GNU taking over in 15 years like Linux took over 20 years ago. For now though, it's nothing more than a novelty to me.

I'm interested in what you have to say. I'm particularly interested in how it manifests that Mach was a research project and that the researchers knew nothing about the practice of writing operating systems.

> And they managed to make a distributed kernel that worked.

So it was great the same way Smalltalk and ITS were great: It was wonderful, but most of us couldn't use it, so it was worthless. Technical acumen doesn't matter if the result isn't out in a usable form.

Will wait for an update on Debian GNU/Hurd.

The Arch Hurd team recently posted a message saying they were in stealth mode - very busy with their lives but not idle. I hope this release will give them the drive to come back with a new release of their own.

I'm actually curious, is this used in production on a mid/large scale anywhere?

First hand experiences?

If someone answers this in the positive (which I doubt), I'd like to tack on the further question "what was so amazing about this software that made you start using it long enough ago, while it was so far in its very slow to come to fruition infancy, that by the time the software reached 0.5 you already had a large scale deployment" ;P.

Why, GNU, why did you have to make a kernel?

Ok, I get that GNU has always wanted to make an OS that they can get credit for (GNU/Linux folks, I'm looking at you), but still.

I suppose that they can make the case for making a non-monolithic kernel, but that doesn't seem necessary to me.

I feel like all the years of dev work on HURD could have been better spent on other projects, but it's not my time :(


Sometimes I think people could better spend their time on something more useful. Then I realise I'm sitting in front of a web browser browsing Hacker News. :)

Why not? There's always room for improvement. Beats another JS app framework.

particularly another poorly-documented, monolithic, over-marketed JS app framework

For one, recall that the HURD project was started before Linux. At the time, there was no free Unix-like kernel they could work on; the rest of GNU had been developed on proprietary kernels.

> I feel like all the years of dev work on HURD could have been better spent on other projects, but it's not my time :(

I don't really think it's nearly as many developer years as you think. HURD isn't that big a project; it's a few people, working on what they want to work on. Most of the actual development effort goes into the Linux kernel.

There's room for more than one free kernel out there. Heck, there's room for a lot. There are still people using and developing FreeBSD, NetBSD, OpenBSD, DragonflyBSD, Darwin, Haiku, ReactOS, and more.

This. If you read RMS' announcement for GNU [1], it had to have a kernel because in 1983 there wasn't a free kernel. And an "operating system" was defined as its kernel, its compiler, and the tools around them for day-to-day work.

So it started first with a compiler, then the basic libraries, and then a kernel. I applaud the team for continuing on the path set forth in that announcement.

[1] https://news.ycombinator.com/item?id=6457525

> I feel like all the years of dev work on HURD could have been better spent on other projects, but it's not my time

For many years, it's been the effort of a handful of volunteers. Lots of GNU development has continued aside from HURD, so it's not as if HURD development has been a significant drain on getting any other useful work done.


Not GNU projects but relevant and different kernels / operating systems:





I don't consider any of these to be a waste of time in the least; some move quicker than others but ultimately every one of these projects is advancing the state of the art and we may one day see the fruit of this difficult type of work (building kernels from scratch).

Don't forget TempleOS... as trippy as it is.

Absolutely, TempleOS is an amazing piece of work including its programming language (Holy C).

Completely agreed. I wonder if it will ever 'leave' his hands, as it really is a nice language, a well thought out C improvement (for the uninitiated: http://www.templeos.org/Wb/Doc/HolyC.html ). I'm very versed in who the creator is, and his personality, but honestly think he may be one of the smartest people in the field. It's a shame more people don't take him seriously (and to their point, I don't always blame them).

People tend to not take crazy people seriously. That's kinda sad because the guy is indeed one of the smartest people in the field.

I just hope he gets some help and is able to work his issues out.

I forgot about HaikuOS too - I love what those guys are doing. Can't remember if they wrote the kernel from scratch too?

Sort of. It's based on the NewOS kernel written by Travis Geiselbrecht ( http://newos.org ), who in turn based many of the concepts on his experience as a BeOS kernel engineer. The Haiku project chose the NewOS kernel because it was open and had reasonably similar concepts and goals to the BeOS kernel, which they'd otherwise need to emulate.

Huh, I always assumed that HURD was just GNU politics, what with all the GNU tools being used with Linux while GNU's original goal was an OS complete with kernel. When I look at it in this light, HURD makes more sense. I do disagree with saying that every OS advances the state of the art, though; a lot of them will probably end up as *nix clones. I have respect for anyone who builds a kernel from scratch, but I also think that most of them are totally batshit insane (Linus included). They do great work, but I never understood why.

Why wouldn't they be advancing it? It might not be academically rigorous in some cases (in some cases it is), but every single project is doing something unique and pushing the sphere of known solutions further out. Very few of the projects even want to be *nix clones, and they are exploring areas that aren't explored by the *nixes because those are too big and static now to experiment (for good reason! it isn't a fault).

It was very detrimental to the project, but it's a key part of the research/thinking/design. If you watch Hurd demos, you'll quickly see how moving bits outside the kernel gives a lot of freedom to the users, in terms of both ideas and performance. Samuel Thibault's 2011 (fragile) demo included mounting an ISO image from an FTP server in a matter of seconds. This falls straight out of the OS design.
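For the curious, that kind of demo is done with Hurd translators: user processes attached to filesystem nodes via settrans. A sketch along the lines of the Hurd documentation's examples (the FTP host and ISO path here are placeholders, and I haven't run this exact sequence):

```
# Attach an ftpfs translator so remote FTP hosts appear under ~/ftp,
# then attach iso9660fs to a remote ISO. Both translators run as
# ordinary user processes -- no root, no kernel mount involved.
settrans -c ~/ftp /hurd/hostmux /hurd/ftpfs /
settrans -a ~/cd /hurd/iso9660fs ~/ftp/ftp.example.org/path/image.iso
ls ~/cd    # browse the remote ISO's contents
```

Because the translator is just a process speaking a filesystem protocol, any user can stack them like this without touching the kernel - which is the freedom the demo was showing off.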

As someone else mentioned, the HURD was started before Linux existed, back when microkernels were all the rage in academia.

Stallman said that by using the Mach kernel they hoped to get finished faster. As it turned out, this was a big mistake, which he acknowledged as his own, and as soon as Linux took off they started focusing on improving the GNU toolchain in conjunction with Linux.

The Hurd is still in development today because there are people who like working on it in their spare time, just like all the other alternative kernel/OS projects out there.

Because a kernel is an essential part of a complete operating system.

So that they can get credit? I don't think the GNU Project was ever about achieving reputation, but rather about spreading the ideal of free software.

Hurd development has been pretty sporadic in general. Just look at GNU's project catalogue: it's vast. That said, if Stallman hadn't egotistically intervened in Bushnell's original plan to adopt the 4.4BSD kernel, the Hurd could have stood a chance.

But no development is a waste.

I'd argue that from the perspective of vetting an operating system's security, a good microkernel implementation with userspace drivers would be a great win. Even if you vet the entire Linux code base (for the most part) and use only the free drivers, you still have multiple attack vectors that could get code running in kernel space. Which is bad.

A lot of people (myself included) probably don't think Linux is the last kernel to rule them all, ever. It doesn't distribute itself well, it is quite large, and its speed is derived from its massive developer staff. Simplifying your kernel doesn't really matter when it has hundreds of active contributors, but what about in a hundred years when all the current Linux devs are gone, and you have to try to get new generations to adopt the code base and maintain it?

There are heaps of open source projects that do the same thing. Count the text editors, window managers, game emulators, audio players, the list goes on forever. This is the _second_ kernel in the same kind of space.

First, actually. :-)


Competition makes everybody better in the long run. Also, sometimes things are just ahead of their time, and don't "come into their own" until later. It's still entirely possible that Hurd will prove to be very important and useful for some class of application(s).

> I feel like all the years of dev work on HURD could have been better spent on other projects, but it's not my time

Exactly. The people working on it chose to work on it, and none of us have any standing to tell them that they were wrong for doing so.

It's not wasted. I've seen the design concepts of HURD serve as inspiration for other projects, and people have experimented with them. Even I picked up some (dumbed down) ideas that I didn't limit to software. Sometimes I think there's some similarity to the SOA approach (pretty much everything's a server/service), just that HURD had the ideas earlier. But that might be too high-level a view.

Thanks for all the answers.

I can understand why people are making HURD, but I'll stick to Linux for now :P.

Never knew this was still in development. It was a great idea but got little attention next to Linux, like some other projects including Haiku.
