Examining the Legendary Hurd Kernel (2008) (informit.com)
86 points by vezzy-fnord on July 11, 2015 | 63 comments



It's sad. Neither Hurd nor Mach reached the performance level of QNX. L4 finally did, although L4 now uses shared memory too much. (While it simplifies the kernel to do message passing via shared memory, it makes it easier for one process to mess up another. Queues in shared memory have to be managed very carefully.)

The real issues in microkernel performance are subtle. Message passing and CPU dispatching have to be well-integrated. Otherwise, when you pass a message from one process to another in a subroutine-call-like way and get a response, you go to the end of the line for CPU time twice. A good test of a message passing system is to write a client/server pair which does some trivial computation on the server and then returns a reply. Benchmark it. Then repeat the benchmark with the system CPU-bound. If performance drops by orders of magnitude under load, the message passing / CPU dispatch relationship has been botched.
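A minimal sketch of such a benchmark on an ordinary POSIX system (the socketpair transport, message size, and round count here are arbitrary choices, not anything QNX-specific): run it once on an idle machine, then again while something like `yes > /dev/null` on every core keeps the box CPU-bound, and compare the per-round-trip times.

    /* Hypothetical round-trip IPC benchmark sketch (not QNX-specific):
     * a child "server" doubles an integer and replies; the parent times
     * N request/reply round trips. Compare results idle vs. CPU-loaded. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/socket.h>

    #define ROUNDS 100000

    int main(void) {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) { perror("socketpair"); return 1; }

        if (fork() == 0) {                       /* server: trivial computation */
            long req, rep;
            close(sv[0]);
            while (read(sv[1], &req, sizeof req) == sizeof req) {
                rep = req * 2;
                write(sv[1], &rep, sizeof rep);
            }
            _exit(0);
        }
        close(sv[1]);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < ROUNDS; i++) {      /* client: send, block for reply */
            long rep;
            write(sv[0], &i, sizeof i);
            read(sv[0], &rep, sizeof rep);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / 1e3;
        printf("%.2f us per round trip\n", us / ROUNDS);
        close(sv[0]);
        return 0;
    }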

On the security front, the "critique" says "programs are assumed to represent the interests of the user and, as such, are run with the user’s authority." This, of course, comes from the UNIX/Linux model and still gives us trouble. We still have trouble running something like Flash or a browser on a desktop without giving it far too much authority.

(Of course, if you do have a more granular authority model, apps ask for too much, as the mobile world has learned. Grumble of the week: Firefox on Android now demands access to my phone book. Not because the browser needs it, but because their ancillary products, Sync or Pocket, might.)


I've been hearing these stories about the "performance level of QNX" since at least 2000. Perhaps it's even true (though I didn't see it during the QNX 4.x era, but I blame my lack of experience back then). I even saw various synthetic tests of QNX vs. L4 a couple of years ago. Also, the QNX source code was released about 5 years ago, if I'm not mistaken. So what is the super-performing sauce of QNX that other microkernels lack? What are those concepts/ideas? Why weren't they re-implemented in L4/Hurd (you can't just borrow the code, QNX is NOT free software)?


I'd be interested in hearing Animats' opinion. My guess is that it stayed simpler than Mach but not too simple. The right tradeoffs in the kernel, along with fitting into the tiniest of caches and [typical of RTOSes] a smart choice of instructions for predictable timing. Here's a paper on an older version of it:

http://cseweb.ucsd.edu/~voelker/cse221/papers/qnx-paper92.pd...

In 1992, it could execute in 8KB of cache with the microkernel and interrupt handler. That by itself should buy it some performance. :)


The most important operation in QNX is MsgSend, which works like an interprocess subroutine call. It sends a byte array to another process and waits for a byte array reply and a status code. All I/O and network requests do a MsgSend. The C/C++ libraries handle that and simulate POSIX semantics. The design of the OS is optimized to make MsgSend fast.

A MsgSend is to another service process, hopefully waiting on a MsgReceive. For the case where the service process is idle, waiting on a MsgReceive, there is a fast path where the sending thread is blocked, the receiving thread is unblocked, and control is immediately transferred without a trip through the scheduler. The receiving process inherits the sender's priority and CPU quantum. When the service process does a MsgReply, control is transferred back in a similar way.
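Roughly, the send-side fast path could look like the sketch below. The thread structure, field names, and helper functions are all invented for illustration; this is not QNX source, just the direct-handoff idea spelled out.

    /* Hypothetical sketch of the direct-handoff fast path described above.
     * Structures and helpers are invented; this is not QNX source code. */
    #include <stddef.h>
    #include <string.h>

    typedef enum { READY, SEND_BLOCKED, RECV_BLOCKED, REPLY_BLOCKED } tstate_t;

    struct thread {
        tstate_t state;
        int      priority;   /* scheduling priority */
        int      quantum;    /* CPU time left in the current timeslice */
        void    *msg_buf;
        size_t   msg_len;
    };

    /* Hypothetical kernel helpers. */
    void switch_to(struct thread *t);                            /* direct context switch */
    void enqueue_sender(struct thread *recv, struct thread *s);  /* queue on server's channel */
    void reschedule(void);                                       /* normal scheduler pass */

    /* Invoked when 'sender' does a MsgSend() targeting 'receiver'. */
    void msg_send(struct thread *sender, struct thread *receiver)
    {
        if (receiver->state == RECV_BLOCKED) {
            /* Fast path: receiver already waits in MsgReceive. Copy while
               the data is hot in the cache, then hand the CPU over directly. */
            memcpy(receiver->msg_buf, sender->msg_buf, sender->msg_len);
            sender->state      = REPLY_BLOCKED;      /* sender now waits for MsgReply */
            receiver->state    = READY;
            receiver->priority = sender->priority;   /* inherit priority... */
            receiver->quantum  = sender->quantum;    /* ...and remaining quantum */
            switch_to(receiver);                     /* no trip through the scheduler */
        } else {
            /* Slow path: server busy; queue the sender and reschedule normally. */
            sender->state = SEND_BLOCKED;
            enqueue_sender(receiver, sender);
            reschedule();
        }
    }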

This fast path offers some big advantages. There's no scheduling delay; the control transfer happens immediately, almost like a coroutine. There's no CPU switch, so the data that's being sent is in the cache the service process will need. This minimizes the penalty for data copying; the message being copied is usually in the highest level cache.

Inheriting the sender's priority avoids priority inversions, where a high-priority process calls a lower-priority one and stalls. QNX is a real-time system, and priorities are taken very seriously. MsgSend/Receive is priority based; higher priorities preempt lower ones. This gives QNX the unusual property that file system and network access are also priority based. I've run hard real time programs while doing compiles and web browsing on the same machine. The real-time code wasn't slowed by that. (Sadly, with the latest release, QNX is discontinuing support for self-hosted development. QNX is mostly being used for auto dashboards and mobile devices now, so everybody is cross-developing. The IDE is Eclipse, by the way.)

Inheriting the sender's CPU quantum (time left before another task at the same priority gets to run) means that calling a server neither puts you at the end of the line for CPU nor puts you at the head of the line. It's just like a subroutine call for scheduling purposes.

MsgReceive returns an ID for replying to the message; that's used in the MsgReply. So one server can serve many clients. You can have multiple threads in MsgReceive/process/MsgReply loops, so you can have multiple servers running in parallel for concurrency.
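For reference, the user-level side looks roughly like this. It's reconstructed from memory of the QNX Neutrino <sys/neutrino.h> calls (ChannelCreate, ConnectAttach, MsgSend, MsgReceive, MsgReply); the message layout, error handling, and exact argument values are simplified, so treat it as a sketch rather than copy-paste code.

    /* Sketch of a QNX Neutrino client/server pair using native message passing.
     * Reconstructed from memory of the documented API; details may be off.
     * In practice the two sides would live in separate processes. */
    #include <sys/neutrino.h>   /* ChannelCreate, ConnectAttach, MsgSend, ... */
    #include <sys/types.h>

    struct my_msg { int op; char data[64]; };     /* illustrative message layout */

    /* Server: create a channel, then loop in MsgReceive/process/MsgReply. */
    void server(void)
    {
        int chid = ChannelCreate(0);              /* advertise chid out of band */
        struct my_msg msg, reply;
        for (;;) {
            int rcvid = MsgReceive(chid, &msg, sizeof msg, NULL);
            if (rcvid < 0)
                break;
            reply = msg;                          /* the trivial "computation" */
            reply.op += 1;
            MsgReply(rcvid, 0, &reply, sizeof reply);
        }
    }

    /* Client: attach to the server's channel, then do a blocking send/reply.
     * MsgSend blocks the caller; if the server thread is already waiting in
     * MsgReceive, the kernel hands control over directly (the fast path). */
    int client_call(pid_t server_pid, int chid)
    {
        struct my_msg msg = { 41, "hello" }, reply;
        int coid = ConnectAttach(0, server_pid, chid, 0, 0);  /* node 0 = local */
        if (coid < 0)
            return -1;
        if (MsgSend(coid, &msg, sizeof msg, &reply, sizeof reply) < 0)
            return -1;
        return reply.op;                          /* 42, if all went well */
    }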

This isn't that hard to implement. It's not a secret; it's in the QNX documentation. But few OSs work that way. Most OSs (UNIX-domain sockets, System V messaging) have unidirectional message passing, so when the caller sends, the receiver is unblocked, and the sender continues to run. The sender then typically reads from a channel for a reply, which blocks it. This approach means several trips through the CPU scheduler and behaves badly under heavy CPU load. Most of those systems don't support the many-to-one or many-to-many case.

Somebody really should write a microkernel like this in Rust. The actual QNX kernel occupies only about 60K bytes on an IA-32 machine, plus a process called "proc" which does various privileged functions but runs as a user process. So it's not a huge job.

All drivers are user processes. There is no such thing as a kernel driver in QNX. Boot images can contain user processes to be started at boot time, which is how initial drivers get loaded. Almost everything is an optional component, including the file system. Code is ROMable, and for small embedded devices, all the code may be in ROM. On the other hand, QNX can be configured as a web server or a desktop system, although this is rarely done.

There's no paging or swapping. This is real-time, and there may not even be a disk. (Paging can be supported within a process, and that's done for gcc, but not much else.) This makes for a nicely responsive desktop system.


Thanks for the detailed reply. I've saved it in case the topic comes up in another forum.

The design is great. I get most of it. Security is a big use case for microkernels these days. One part that jumps out at me is that the receiver inherits the sender's priority and quantum during a transfer of control. I'm guessing the kernel manages the inheritance in storage only it controls, right? Otherwise, I could see issues coming up.

So, it made a robust desktop, eh? I considered getting a copy of it but didn't have time. What do you think about integrating a knock-off of this kernel with MINIX 3 or GenodeOS? They seem to be the furthest ahead of the open, microkernel-based systems at getting to something usable.


This sounds a lot like Android's binder mechanism for inter-process communication.


Binder was created to solve the same problem (Linux IPC was too slow) but uses somewhat different approaches.


Some years ago, I spent a lot of time studying GNU Mach and the Hurd (I also made some small contributions). I think I can say that I know both pretty well. I even started a project to preserve the OSF Mach + MkLinux source code (https://github.com/slp/mkunity), a very cool project for its time (circa 1998).

These days I prefer to do my kernel hacking on monolithic kernels, mainly NetBSD. I've stopped working on Mach, Hurd and other experimental microkernels (there are a bunch out there) because it was becoming increasingly frustrating.

If you asked me to define the problem with microkernels in one word, it would be "complexity". And it's a kind of complexity that impacts everything:

- Debugging is hard: On monolithic kernels, you have a single image, with both code and state. Hunting a bug is just a matter of jumping into the internal debugger (or attaching an external one, or generating a dump, or...) and looking around. On Hurd, the state is spread among Mach and the servers, so you'll have to look at each one trying to follow the trail left by the bug.

- Managing resources is hard: Mach knows everything about the machine, but nothing about the user. The server knows everything about the user, but nothing about the machine. And keeping them in sync is far too expensive. Go figure.

- Obtaining reasonable performance is har... impossible: You want to read() a couple of bytes from disk? Good: prepare a message, call into Mach, yield a little while the server is scheduled, copy the message, unmarshal it, process the request, prepare another message to Mach to read from disk, call into Mach, yield waiting for rescheduling, obtain the data, prepare the answer, call into Mach, yield waiting for rescheduling, and obtain your 2 bytes. Easy!

In the end, Torvalds was right. The user doesn't want to work with the OS, he wants to work with his application. This means the OS should be as invisible as possible, and fulfill userland requests following the shortest path. Microkernels don't comply with this requirement, so from a user's perspective, they fail natural selection.

That said, if you're into kernels, microkernels are different and fun! Don't miss the opportunity of doing some hacking on one of them. Just don't be a fool like me, and avoid becoming obsessed with trying to achieve the impossible.


It's like the argument about excessive modularity in software design in general: you can split a system into so many little pieces that each one of them becomes very (deceptively) simple, but in doing so you've also introduced a significant amount of extra complexity in the communication between those pieces.

Personally, I think modularity is good up to the extent that it reduces complexity by removing duplication, but beyond that it's an unnecessary abstraction that obfuscates more than simplifies.


The communication would've happened anyway. Now it just happens through a common mechanism with strong isolation. That all the most resilient systems, especially in safety-critical space, are microkernels speaks for itself. For instance, MINIX 3 is already quite robust for a system that's had hardly any work at all on it. Windows and UNIX systems each took around a decade to approach that. Just the driver isolation by itself goes a long way.

Now, I'd prefer an architecture where we can use regular programming languages and function calls. A number of past and present hardware architectures are designed to protect things such as pointers or control flow. Those in production are not, but have MMU's & at least two rings. So, apps on them will both get breached due to inherently broken architecture and can be isolated through microkernel architecture with interface protections, too. So, it's really a kludgey solution to a problem caused by stupid hardware.

Still hasn't been a single monolithic system to match their reliability, security, and maintenance without clustering, though.


>For instance, MINIX 3 is already quite robust for a system that's had hardly any work at all on it. Windows and UNIX systems each took around a decade to approach that. Just the driver isolation by itself goes a long way.

MINIX3 also has hardly any work done WITH it, so I don't think we can compare it to Windows and UNIX systems regarding robustness, unless we submit it to the same wide range of scenarios, use cases and workloads...


I'd like to see a battery of tests to see where it's truly at. Yet, there's still not a MINIX Hater's Handbook or something similar. That's more than UNIX's beginnings can say. ;)


Communication would've happened, but probably between far fewer actors. So, you have a communication channel which is orders of magnitude slower, and bigger communication needs. Not good.

That said, about the reliability point, I agree with you. If you're building a specialized system, and reliability is your main concern, microkernels+multiservers are the way to go (or, perhaps, virtualization with hardware extensions, but this is a pretty new technology for some industries).

Probably you're going to need to add orthogonal persistence to the mix, to be able to properly recover from a server failure, or an alternative way to sync states, which will also have an impact on performance. But again, you're gaining reliability in exchange.


The communication channel does get slower. The good news is that applications are often I/O bound: lots of comms can happen between such activity if designed for that. One trick used in the 90's was to modify a processor to greatly reduce both context switching and message passing overhead. A similar thing could be done today.

Of course, if one can modify a CPU, I'd modify it to eliminate the need for message-passing microkernels. :)


I think this is a good example of the law of conservation of complexity[1]. You can't reduce complexity, you can only change what's complex. In the case of monolithic kernels versus microkernels, it sounds like going to a microkernel moves the complexity from the overall design into the nuts and bolts of interprocess communication.

[1] https://en.wikipedia.org/wiki/Law_of_conservation_of_complex...


You've just hit the nail right on the head.



If you'd ask me to define the problem with microkernels with one word, that would be "complexity".

The problem with Mach, you mean. All the examples you listed are specific to it.


I don't know about other implementations, but I remember the original design of l4hurd (based on L4Ka) was even more complex. I'd say this applies to all "pure" multiserver designs.


Check out Genode.org, MINIX 3, or QNX. They seem to have gotten a lot more done than Hurd despite being microkernel-based OS's. KeyKOS is one of the best from back in the day, with EROS being a nice x86 variant of it. Turaya Desktop is based on the Perseus Framework.

Many working systems in production from timesharing to embedded to desktop that are microkernel-based. Hurd and Mach's problems are most likely due to design choices that created problems.


I don't know about the others, but at least both QNX and Minix3 cheated a little, i.e. allowing servers to write directly to other user space programs.

Also, the presence of microkernel+multiserver systems is still quite symbolic in comparison with their monolithic counterparts.


This makes me sad in three ways.

1. Hurd is still not here, yet.

2. With Duke Nukem Forever actually finished, and with Perl 6 looking like it might get finished this year, we've kind of run out of good jokes to crack about the Hurd. What will be the next killer app that will run on Hurd out of the box, once it's finished?

3. Unless I've been misinformed, there are multi-server microkernel systems out there that do work. So it's not like the Hurd was a totally misguided idea. And yet, like cold fusion, artificial intelligence and hypo-allergenic kittens, it's still out there, tantalizingly out of reach...

Well, one day


As tired as it may sound, I think there just might be a convenient time for the Hurd to rise, albeit in an unconventional way.

You already have a de facto GNU OS at the moment in the form of the Guix System Distribution (GuixSD), what with it letting you configure the entire system in Scheme, perform transactional upgrades, system state rollbacks, it has its own init daemon and service scripts in Scheme, etc.

At the same time, we've been seeing some efforts in creating tiny builds of the Linux kernel [1] that go as far as to reduce syscalls, VMM algorithms, capabilities, character devices and so forth. There's also the recent Linux libOS effort to create a network stack in userspace as a shared library, though that seems about the extent of it. NetBSD then gives you a userland driver framework with rump kernels.

So what one can do is build a really stripped-down Linux kernel, write a Mach IPC compatibility layer, run the Hurd servers on top of it, and combine the resulting product with GuixSD. And you now have the complete GNU system. As far as you'll ever get, anyway.

This has already been discussed before [2], but it's not something that has ever been given any priority. With the current climate, it might be worth a shot.

As to why anyone would want to do this... well, for one you get a full OS that is entirely configurable in Scheme with all of its services running as userland file servers (translators) in a sort of Plan 9-ish way, all the while supporting the Linux API. That's still pretty ahead of GNU/Linux. The Hurd also has features like running multiple instances of itself in user mode (subhurds) that can serve a similar purpose to containers.

It's obvious no one's going to port the Hurd servers away from GNU Mach, so this is the best shot anyone has. Any Hurd or Guix devs in here to comment on this?

[1] https://tiny.wiki.kernel.org/

[2] https://www.gnu.org/software/hurd/open_issues/linux_as_the_k...


Hurd developer here.

I also have high hopes for Guix, and in fact there is a gsoc project to port Guix to the Hurd. Guix tries to empower the user, a goal shared by the Hurd. They are a perfect match.

I do not believe that it is easy to provide the Mach IPC interface in the Linux kernel. The IPC mechanism is tightly integrated into the thread switching and scheduling systems.

Furthermore, I do not see much benefit in doing that; the main point would be drivers, which we want to run as userspace processes anyway. And we are doing that using the DDE framework. Rump is interesting, and it wouldn't even be that hard. It just lacks someone actually doing it.

What people don't realize is how compatible the Hurd is. We are shipping the glibc. Debian/Hurd contains ~80% of the software that is found in Debian/Linux. We have Firefox. The other day I deployed two Django sites on my Hurd development server to work out some issues I had with the Apache configuration I had on the production system. I was curious how it would go. All the Python stuff worked out of the box, I merely had to select a different Apache mpm module, because the default didn't work. Need to rescue a box? Append "init=/bin/bash" to the kernel command line to circumvent sysvinit.

Other microkernel based systems might have more ambitious goals, but many of them sacrifice compatibility to reach them. Some aspects in POSIX are hard to implement in a distributed fashion, for example fork(2). But what good is a fancy system if it requires a huge effort to port applications to it?


Right, the hypothetical tiny Linux kernel would probably be built without driver support and have NetBSD's rump kernel drivers in place of both Hurd's DDE and the monolithic Linux drivers, which keeps the userspace advantage.

It's true that a Mach port is quite tricky (particularly the virtual memory semantics).

Apparently there was an old abandoned attempt to port Mach on top of POSIX which, if someone obtains the source, might be a useful reference to see what went wrong or is incomplete [1].

It's worth noting that the reverse (Linux on top of Mach) was already done by Apple nearly a couple of decades ago as MkLinux, but they ultimately settled on XNU.

[1] https://www.gnu.org/software/hurd/open_issues/mach_on_top_of...


I'm not a Hurd supporter, but that's an interesting idea. I'd say replace Scheme with Python or something mainstream that's similarly powerful. Adding LISP to stuff usually kills it off unfortunately, with Clojure a freak occurrence. The transactional upgrades and rollbacks feature by itself would be compelling to some audiences.


"Adding LISP to stuff usually kills it off"

What? Oh, you mean like Emacs. Scheme is a beautiful language; it's flexible and trivial to learn. Replacing it with Python would be silly. Besides, Guile Scheme is the designated extension language of the GNU system. Gnucash, dmd (the init system), the Gimp, Guix, gEDA and many more GNU applications all can be hacked in or extended with Guile. In the GNU system Guile Scheme is "mainstream".


"Oh, you mean like Emacs."

No, I mean like... surveys the market... nearly everything else. The LISP field is almost non-existent outside of academia and a few companies. Almost nothing mainstream in proprietary or FOSS is built with it. Plus, most programmers hate it. So, using something like that in a FOSS app intended to go mainstream is quite a risk.

"Besides, Guile Scheme is the designated extension language of the GNU system."

Oh Ok. That they're pushing people to use an unpopular language at least explains why the apps you cited used it.

" In the GNU system Guile Scheme is "mainstream"."

Nah, most programmers haven't heard of it. Of those that have, I'd bank on only a subset even using it. I'd say it's used by a tiny set of developers, esp extension builders. And for tools that each mostly represent a small number of developers or users.

You're really making my point for me by redefining the word mainstream to be "barely known software barely making it with a few exceptions." They should consider something that actually is mainstream as an experiment in increasing adoption. Probably some other things to reconsider while they're at it.


You mean like AutoCAD was killed by all the other CAD packages with a more conventional language? I'd like to see such a counterexample. Lisp is only killed by managers who are afraid to hire cheap labour. Like yahoo and their store.


What language is AutoCAD written in? And what percentage of successful products and projects use LISP? That's what I mean. It's the status quo of where LISP is in the marketplace. It's you that have to provide dozens to thousands of counterexamples to even illustrate a trend. They're not there.

Whereas companies adopting PHP, Python, Ruby, Lua, and so on have no trouble finding help, libraries, or customers. Because they're mainstream and attract such people. See the difference now?


> I'd say replace Scheme with Python or something mainstream that's similarly powerful.

I use Nix, which is a non-Lispy language which Guix is based on, and I've found myself wanting Lisp-like metaprogramming on several occasions; which Python isn't suited for. I might switch to Guix precisely for that.

> Adding LISP to stuff usually kills it off unfortunately

Do you have any examples? Lisp has been around since the 1950s, and Scheme since the 1970s; they seem to be pretty death-proof. Python's only been around since the 1990s, so it's got a shorter track record.


It's a great series of languages with plenty of advantages. There's still a small niche that uses it and one variant that's mainstreaming. Python is a knockoff of it with a decent standard library that grew wildly successful. It's why I recommend it as next best thing.

"Do you have any examples? "

Most articles written on the history of LISP and its advantages come with examples of it going away in various companies or not getting adopted in general. Just try to recall how many of the following were done in (and still run on) LISP to see what I mean: processor support; operating systems; compilers; business apps; productivity tools; widely-used libraries; web services; popular FOSS apps or services; mobile apps or services; embedded systems. I'm sure your list in each is going to be tiny compared to the ones that chose alternative platforms.

That's the LISP effect. Both commercial products and FLOSS have continuously fled from the LISPs. Those that used mainstream alternatives got plenty of success. Just compare... pick a variant... a LISP's success against C/C++, C#, Java, PHP, Python and so on. You can have a great idea but hardly anyone will contribute to it if it's LISP. Clojure's been the only exception; it's tied to the JVM, and it's still barely a blip [1] on the overall radar.

So, that's what I mean. You can use what increases odds of extension or adoption. Or you can use LISP with difficulty getting commercial or FOSS help. Now, the types of people that can churn out great LISP might be worth the extra effort and money. Jane St has a similar philosophy for Ocaml. Just worth knowing it's going to be an uphill battle compared to the alternatives.

[1] http://langpop.com/


I don't think you are far off. Lisp adds complexity with its code-is-data / data-is-code philosophy, which makes it harder to use compared to primitive scripting languages like PHP (which will have a hard time leaving its domain).

At the same time, Lisp attracts a bunch of people because it is different. This hinders its mass adoption, but also secures its survival.

> tiny compared to the one's that chose alternative platforms

There are also a few domains where there are several Lisp applications or some which won't go away, even though the domain might be very specialized. For example, in computer algebra: Maxima, Reduce and Axiom are all still maintained. In music composition there is a bunch of applications, some new (OpusModus) and some old (PWGL, OpenMusic, Symbolic Composer). There are a few theorem provers in use, like ACL2 and PVS (NASA). In CAD, several Lisp systems were bought off the market (for example iCAD by Dassault), but others still exist (like Genworks, or PTC's CREO Elements). There are some of these clusters, small but interesting.

Shortly after the AI winter, Lisp was mostly dead and only a few barely maintained implementations remained. Now more than ten Lisp implementations are being maintained, two large commercial implementations (Allegro CL and LispWorks) are still under further development, and new implementations still appear from time to time (like CLASP on top of LLVM, or mocl, a full-program-to-C translator for mobile applications). Thus there are people who can maintain and develop complex implementations, and there are some users out there.

It's small compared to PHP or Python, but it seems that there is a sustainable 'market'.


Your post is mostly accurate. Yet, the number of maintained implementations doesn't tell me anything about whether it's mainstream or worth using. The Brainfuck language has a whole page's worth of variants, implementations, uses, IDE's and so on. I doubt anyone will argue that we should use it for commercial projects. ;) A maintained, active implementation is only a start for a language.

For LISP, the market is small and fragmented. They've always had the problem of siloing off from each other while mostly not building enough standardized functionality to get momentum. The language also has lots of warts from the old days that it should eliminate (Clojure does a bit). They've mostly ignored all this and that's why the language doesn't go anywhere. That Python, inspired by LISP, lacks every one of these problems is probably a reason for its success.

As far as LISPs go, I thought about relearning it and taking another go at it. Racket is the most interesting to me in terms of power, available libraries, community, and IDE. My old trick was doing imperative programming in LISP for rapid iteration and then auto-generating equivalent code to be handed off to a C or C++ compiler. That let me macro out the hacks needed for portability, performance, and security in many cases. Plus, interactive development + incremental function compilation is AWESOME.

Thinking on two lines this time. One is to redo my old approach with C and Go as the new targets. That lets me piggy-back on all their momentum with rapid prototyping and zero-to-low-overhead abstractions. Another is to reimplement Python & its standard lib in LISP in a way that integrates code in both. That should give a performance boost, a way into LISP for Python developers, and let me use Python code/libraries. Ton of work going into the latter.

Really busy so not sure if I can do either. What do you think about the options?


> Brainfuck language has a whole page worth of variants, implementations, uses, IDE's and so on. I doubt anyone will argue that we should use it for commercial projects.

Well, I fear Brainfuck does not have commercial vendors and users willing to spend money on that. Lisp has that.

> whether it's mainstream or worth using.

It's not mainstream, but there are people finding it worth using.

> That Python, inspired by LISP, lacks every one of these problems is probably a reason for its success.

Python is not at all inspired by 'LISP' and not even by Lisp.

> My old trick was doing imperative programming in LISP for rapid iteration and then auto-generating equivalent code to be handed off to a C or C++ compiler.

Common Lisp compilers do that C code generation for me. No need to write macros for that.

> Another is to reimplement Python & its standard lib in LISP in a way that integrates code in both.

That has been tried already. Doesn't interest Lisp developers much. For a Lisp user there is little benefit of using a mix of Python and Lisp. Either use Python without Lisp or use Lisp, especially since Lisp has the better implementations and allows a wide range of applications.

> What do you think about the options?

The Lisp-to-C option already exists in multiple forms (GCL, ECL, MKCL, mocl, and a few in-house implementations). Another, Lisp to C++, is under development: CLASP.

Python in Lisp also exists.

https://common-lisp.net/project/clpython/


"It's not mainstream, but there are people finding it worth using."

Remember where this discussion started. They're trying to get Hurd mainstream. A person made a suggestion on that. It uses something that failed to go mainstream almost every time and is mostly hated by mainstream programmers. So, I suggested not using it if the goal is making something very popular. In light of that, LISP can have as many tiny communities as it likes: it won't help this use case.

"Python is not at all inspired by 'LISP' and not even by Lisp."

My bad. Misremembered: it was ABC.

"Common Lisp compilers do that C code generation for me. No need to write macros for that."

Compiling a functional program to efficient C isn't the same as writing inherently efficient C in a functional language. An amateur like me threw together one in a short time while the other took many PhD's worth of effort to get in good shape. Most of them still don't outperform my old system because they try to solve a harder problem.

"That has been tried already. Doesn't interest Lisp developers much. For a Lisp user there is little benefit of using a mix of Python and Lisp. Either use Python without Lisp or use Lisp, especially since Lisp has the better implementations and allows a wide range of applications."

Makes sense. Appreciate the review. :)

"The Lisp to C options exists already in multiple forms (GCL, ECL, MKCL, mocl and a few in-house implementations). Another Lisp to C++ is under development: CLASP."

"The Lisp to C options exists already in multiple forms"

I think you misread my comment. Your list is all LISP implementations with C or C++ support. My tech was a C/C++ implementation on LISP. A subset, more honestly, as I never needed the whole thing. I macro'd useful patterns to make it more like a BASIC-style 4GL. Those macros had a straightforward mapping to highly efficient C. The rest, much like BASIC, had a straightforward mapping. My style was imperative, the constructs were imperative, the target was imperative, and data types etc. were all consistent. The result was a productivity boost over developing in C or C++, with interactive development, better debugging, less coding, automated checks at the flip of a switch, & automated generation of all the source & makefiles.

I haven't seen anyone post something similar in years. So, what do you think of that kind of tool as a compromise between highly efficient, portable C and the safer, higher-level languages that aren't efficient enough for, e.g., operating systems? Is such a thing worth rebuilding until functional compilers are good enough to wholesale replace C/C++ in their niche areas?

"Python in Lisp also exists."

In one paragraph, you say it's pointless and nobody is interested. Then you give me a link to an implementation in another. Puzzling you are. Thanks for the link, though. :)


> Compiling a functional program to efficient C isn't the same as writing inherently efficient C in a functional language. An amateur like me threw together one in a short time while the other took many PhD's worth of effort to get in good shape. Most of them still don't outperform my old system because they try to solve a harder problem.

Lisp is not necessarily 'functional'. It's also quite imperative.

There have been in the past Lisp compilers which generated 'readable' and 'maintainable' C from Lisp code. The compilers I mentioned didn't really have that goal.

> My tech was a C/C++ implementation on LISP. A subset, more honestly, as I never needed the whole thing. I macro'd useful patterns to make it more like a BASIC-style 4GL.

I remember reading about users writing C in Lisp, with added use of macros to make code generation easier.

> The result was a productivity boost over developing in C or C++ with interactive development, better debugging, less coding, automated checks at the flip of a switch, & automated generation of all the source & makefiles.

Sounds good.

> Is such a thing worth rebuilding until functional compilers are good enough to wholesale replace C/C++ in their niche areas?

Hard to say. I doubt that C/C++ users will use Lisp-based tools - unless it's really, really good and useful.

> In one paragraph, you say it's pointless and nobody is interesting. Then you give me a link to an implementation in another. Puzzling you are. Thanks for the link, though. :)

'Nobody' is maybe the wrong word. 'Almost nobody' is probably better. The Python-in-CL implementation has existed for some years, but I haven't heard of larger adoption or much further development. It was developed at a time when Python integration seemed to be more important. But it's definitely a nice try and may have some interesting bits in it...


As an aside, you can have a hypo-allergenic kitten today!

Behold, the Siberian Forest Cat: https://en.wikipedia.org/wiki/Siberian_cat

I had one of these, and she was a remarkable cat. Very active, very engaged, loved to play fetch, and unbelievable amounts of fur. My allergic-to-cats friends were totally fine around her, too!


Duke Nukem Forever shipped before Hurd. Duke Nukem Forever itself had some... delays [1]. Yet, it got done before Hurd and had 376,000+ installs. You've put Hurd's situation into perspective in a way I didn't think possible. Just wow.

[1] http://news.softpedia.com/news/What-Happened-Since-Duke-Nuke...


The fun part is that DNF was delayed because of too much money. The HURD didn't need money for that. GNU owns.


I think the team rolling in the money while delivering nothing was probably their end game. HURD's end game was delivering something that worked. In that case, the DNF team owns. :P


Meanwhile, Linux plays a dominant role in the world, with a kernel that is in absolutely everything.

So yeah, Micro-kernels work. But monolithic and simple has been carrying the day for a decade plus.

(and before anyone complains about NT, they lost their microkernel card when they moved graphics into the kernel. They, like XNU on the Mac, are hybrid kernels).


Actually if we're going purely by adoption rates, the most common kernels are probably some variant of L4 (used in baseband processors among other places) and ITRON in miscellaneous electronics, both of which are microkernels and the latter also an RTOS.


Maybe if the likes of Intel and IBM had decided to put their employees on Hurd instead of Linux, the story would be different.


I sit on our company's architecture review board, and we do a pass on every product that the company does. There are three links that I regularly have grounds to give:

The "Linux is obsolete" debate: http://www.oreilly.com/openbook/opensources/book/appa.html

Taligent's Guide to Designing Object-Oriented Software: http://www.amazon.com/Taligents-Guide-Designing-Programs-Obj...

And "The Critique": http://walfield.org/papers/200707-walfield-critique-of-the-G...


And yet, the monolithic systems (esp Linux) proved to have all the problems predicted. Shapiro did a nice job showing how ridiculous Linus's claim is in Round 2 of the debate, which you left off:

http://www.coyotos.org/docs/misc/linus-rebuttal.html

Can't help but repeat a main point: there are many high assurance (reliability or security) microkernel-based systems that have been fielded but not a single one has achieved this based on the monolithic model Linus loves. QED.



It would be nice to see the Hurd on seL4 now that its source is freely available.



I meant the mathematically proven fork of L4.


Looks like your homemade mouse needs its buttons debounced.


Firefox..fills up available RAM with page caches. Often, swapping these pages back in from disk can be slower than re-fetching from the Internet.

WTF? I'm not a kernel guy and would like to know why this is. Very, very counterintuitive to me.


Non-SSD disks are really slow for random accesses, like you might need if the cached files are not contiguous, and often congested due to background processes competing for their use. Meanwhile, the server most likely has those files in memory, ready to push them.


Yes, and it gets even worse if it's not the browser's disk cache, but the browser's RAM cache paged by the kernel to the swap.

Paging out RAM is fast, you just write pages to the disk in chunks that are contiguous on the disk. But the contiguous disk chunks don't correspond to contiguous virtual addresses, and when reading them back in, the order is just however the application chooses to access the pages, so it's quite random and hard to predict. And you can only buffer writes, not reads - you have to block reads until you get the data.

25 years ago you might have 4MB of swap, at 4KB page size that's 1000 pages, or 1000 potential individual points you may have to seek to.

Today 4GB of swap is sort of a minimum size that's not entirely useless, and that's 1 million pages, or 1 million potential seek points.

So IMHO putting swap on hard disks with 5-10 ms seek latency was very useful back in the day, but is pretty much obsolete now - if you need swap put it on flash based storage.


Disks are what, 10ms random seek? You'd need a good, nearby server to service in under that time. And that assumes you've got a connection already established. If not, then add another roundtrip. I'm not sure "often" is appropriate here.


That's best-case. (Well, not really. Best case is that your disk isn't doing anything, the data is in one contiguous region, the head is in the right position, and the data is about to pass under the head.)

In actuality, it can be a lot worse. I've seen latencies of 500ms+ before, when the disk is in "seek hell". And right now my laptop has a median (!) latency of ~100ms. Mind you I'm listening to music and copying files in the background. But still.


It's kind of hyperbole, granted, but most of our assumptions about disk access being faster rely on only one piece of software accessing that disk. With modern multi-core processors, pre-fetching, and caching both at the application and the kernel level, that assumption is quickly proven faulty.

10ms random seek is fine when you're only accessing one thing. But when multiple things are loading and alternating between locations it becomes significantly slower. Windows startup can be a pretty good indicator of this. I've got one system with an SSD (containing the OS) and a spinning platter drive for data. When I start it up, I have to wait for more than a minute before clicking anything or I end up with an unresponsive application for 3+ minutes. It still induces a quiet, resigned rage that such a powerful system can be nigh-crippled on boot.


I'm not sure about the "often" either, but I don't find it absurd. A full page nowadays has 20-30 files, which might not get cached sequentially, so you're talking about a bunch of seeks. If half need to wait for some other application to read its stuff, you're already talking about a big increase in latency; on the other hand, the connection tax is only paid once per hostname.


It's plausible; I've been at a talk where they suggested checking the local cache and requesting it across the network in parallel. I'd thought maybe they were talking about, say, both people on a fast network, but digging into it, that's not the whole story.

Wikipedia [1] has 5,400rpm hard drives having average rotational latency of 5.56ms.

AT&T's real-time latency charts have a few spots with under 5ms latency, and that chart's just the average, I think, so these might be beatable.

[1] https://en.wikipedia.org/wiki/Hard_disk_drive_performance_ch...

[2] http://ipnetwork.bgtmo.ip.att.net/pws/network_delay.html


There was an xkcd or something similar saying this: network round-trip time was below hard-drive latency on average.


> Since those days, HURD has acquired the same reputation in operating system circles that Duke Nukem Forever has among gamers.

What reputation does Duke Nukem Forever have among gamers?


Vaporware. Announced in 1997. Restarted/largely rebuilt multiple times, finally released in 2011.


It’s possible to have a complete and usable system running nothing other than GNU code.



