This basic idea is something I've applied to a file format that stored objects, but where the cost of serializing and deserializing a whole object graph was prohibitive for a request/response round-trip. For any given object path, foo.bar.baz, a "program" could be compiled that could then be interpreted over the file contents to retrieve the answer. All object paths were available for compilation ahead of time (it was a custom language, long story), so this approach could be far more efficient than any serialization story.
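A minimal sketch of that idea (names and the nested-dict "file" are invented for illustration; the real system compiled against a custom language and a binary format): compile a path into a tiny instruction list ahead of time, then interpret it over the stored graph, touching only the fields on the path instead of deserializing everything.

```python
# Hypothetical sketch: compile "foo.bar.baz" into a small "program"
# of LOOKUP instructions, then interpret it over the stored object
# graph (a nested dict standing in for the file contents).

def compile_path(path):
    """Compile an object path into a sequence of LOOKUP instructions."""
    return [("LOOKUP", field) for field in path.split(".")]

def run(program, store):
    """Interpret a compiled program over the store."""
    node = store
    for op, field in program:
        assert op == "LOOKUP"
        node = node[field]  # only fields on the path are ever touched
    return node

store = {"foo": {"bar": {"baz": 42}}}
prog = compile_path("foo.bar.baz")   # compiled once, ahead of time
print(run(prog, store))              # 42
```

The win is that compilation happens once, offline, while each request only pays for the handful of lookups it actually needs.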
Another tool in this space is Native Client; any chance it could be adapted to allow an even smaller, safer x86 subset, so that you could give the kernel pre-compiled 'callbacks' that it could execute on, e.g., receiving a network packet, with 0 context switches? Probably not without a lot of work, since NaCl relies a lot on virtual memory for memory isolation, but it's an idea.
Oh, I didn't interpret it as such either; just some interesting related work :)
Another peripherally-related idea is Active Messages [1,2]. I see it as the analog to UDFs or other rich executable messages for efficient node-to-node communication in large clusters. It's sort of like RPC if you squint hard enough; you send little executable messages that run on a remote node, and can send you back some data if you want. I guess the point is that having rich executable messages allows you to get around long physical latencies between sender and receiver, as well as more mundane intra-node delays due to userspace/kernelspace context switches.
 http://www.cs.cornell.edu/home/tve/thesis/ (particularly chapter 4, it's a bit wordy but has lots of detail).
 http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA204210&Locati... (there's lots of other published work on J-Machine from Dally's group, too)
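To make the idea concrete, here is a toy in-process illustration (the Node class and handler names are invented, not from the papers): each message names a handler that runs immediately on arrival at the receiving node, with no generic RPC dispatch layer in between. Real active messages go further and run the handler directly in the network receive path to dodge scheduling and context-switch delays.

```python
# Toy illustration of active messages: a message carries a handler
# name plus arguments, and the receiving node runs that handler the
# moment the message arrives, optionally "sending" data back.

class Node:
    def __init__(self, name):
        self.name = name
        self.handlers = {}
        self.inbox = []   # stands in for data sent back to this node

    def register(self, name, fn):
        self.handlers[name] = fn

    def receive(self, sender, handler, *args):
        # Handler runs right away on arrival: no request queue,
        # no separate dispatch thread, no generic unmarshalling step.
        self.handlers[handler](self, sender, *args)

a, b = Node("a"), Node("b")
b.data = {"x": 7}
b.register("get", lambda node, sender, key:
           sender.inbox.append(node.data[key]))

b.receive(a, "get", "x")  # "send" an active message from a to b
print(a.inbox)            # [7]
```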
The anti-serialization mechanism also acted as a transactional store - either all instructions succeeded, or the server effectively had no state change (DB transactions etc. were included into a distributed transaction as necessary). No in-memory state. It was a neat architecture (still is, I guess it's probably still in production).
Snoracle says JavaOS itself is obsolete, FWIW.
Given the impressive stability and security record of the AS/400, they had a point.
An interesting aspect of that is what it does to your security model: In the AS/400 world, where applications are compiled to bytecode and then to machine code, the software that compiles to machine code is trusted to never emit dangerous machine code, as there are no further checks on application behavior. In the Lisp Machine world, anyone who can load microcode is God. In Singularity OS, the .Net runtime is effectively the microcode and the same remarks about Lisp Machines apply.
He just had to sneak in a dig at Tanenbaum.
Microkernels are practical -- they just have different trade-offs. Microkernels are preferable if (a) the whole Linux kernel would be too large, or (b) you consider security more important than performance. It's probably also easier to provide realtime services with microkernels, but I don't know enough about Linux's realtime API to say whether microkernels would be much better here.
I think the main reason for the separation is the realtime constraints needed by radio codecs - if your web browsing slows down because you received an email, it's a minor annoyance, but if you stop sending radio packets because you received an email, you probably lose the connection or worse.
It should be possible to do all this on a single CPU (real-time is no problem - just use an RTOS), but it would be expensive and eat a lot of power.
In theory, yes, but the main smartphone OSes aren't RTOSes, and making them such requires nontrivial reengineering.
And the original thread circa 1992: http://groups.google.com/group/comp.os.minix/browse_thread/t...
Berkeley wouldn't cooperate with development on the 4.4BSD-Lite modified kernel, so in 1987 HURD decided to go with the Mach microkernel. But then they waited 3 years for licensing issues to clear up before investing any real effort into it. CMU stopped work on Mach in 1994, so HURD switched to Utah Mach. Utah stopped working on it in 1996. GNU kept working on that one under the name GNU Mach. And then (from Wikipedia): "In 2002, Roland McGrath branched the OSKit-Mach branch from GNU Mach 1.2, intending to replace all the device drivers and some of the hardware support with code from OSKit. After the release of GNU Mach 1.3, this branch was intended to become the GNU Mach 2.0 main line; however, as of 2006, OSKit-Mach is not being developed.
As of 2007, development continues on the GNU Mach 1.x branch, and is working towards a 1.4 release."
In 2004, an effort was started to move to a more "modern" microkernel. L4 was the first and it died almost immediately. Work started toward the Coyotos microkernel, but between 2007 and 2009, focus shifted to Viengoos. But then "As of 2011, development on Viengoos is paused due to Walfield lacking time to work on it. In the meantime, others have continued working on the Mach variant of Hurd."
Thomas preferred the latter, Stallman the former. As events proved, the BSD approach would have been fine (particularly since the legal issues eventually got cleared up), while the microkernel approach ran into much larger roadblocks than anticipated.
"Windows NT's kernel mode code further distinguishes between the "kernel", whose primary purpose is to implement processor and architecture dependent functions, and the "executive". This was designed as a modified microkernel, as the Windows NT kernel does not meet all of the criteria of a pure microkernel."
FYI, most drivers, including video, have been moved back to a split between user mode and kernel mode, rather than being completely kernel-mode as they used to be (the Wikipedia article links to something that is very out of date). IIRC, application IPC is still kernel-mode.
(about WDDM, which moves most of the display driver functionality back to user mode)
Don't tell me about aufs, UnionMount, or overlayfs: aufs isn't up to date because it's not mainlined (for no good reason I can find), and UnionMount and overlayfs are too far out of mainline and too much in development to find reasonable packages.
Linus: when you can back up your claims with working code, I'll start listening. ;-)
Speaking from experience, there are a lot of edge-case bugs remaining in aufs. It's usable enough for livecds and whatnot, but trying to use it on a server is a recipe for disaster.
Also, I don't think union filesystems are a valid argument. They're not real filesystems in the way NTFS/FAT/extfs/reiserfs or whatever are — there's a lot less work to be done, and I could see them falling under the "toy" category that Linus was talking about...
Commonly used for livecds. I compress /usr; it saves me about 5GB, plus some I/O and therefore some battery.
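For anyone wanting to try the same thing, here's one way it's typically done (the paths, options, and fstab line are illustrative, not from the original comment): build a compressed squashfs image of /usr and mount it read-only in place.

```shell
# Build a compressed, read-only squashfs image of /usr
# (-comp xz trades CPU for a smaller image; gzip is the default).
mksquashfs /usr /usr.sqsh -comp xz

# Mount it over /usr, either manually...
mount -t squashfs -o loop,ro /usr.sqsh /usr

# ...or persistently via a line in /etc/fstab:
# /usr.sqsh  /usr  squashfs  loop,ro  0 0
```

Writes to /usr are then impossible unless you layer a writable branch (e.g. aufs/overlayfs, as discussed above) on top.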
The key thing is to have simple, clean, and well-defined semantics; this is much harder when your VFS is polluted by stuff like symlinks and other weird kinds of pseudofiles.
I know Al Viro has wanted proper union mounts in Linux for many years, but getting proper private namespaces was hard enough, and now nobody uses them, which is sad (though that's more the fault of the suid-centric userspace environment than of the kernel; if you have to be root to create a new namespace, it is rather pointless).
Linus insults someone on a forum -> front page of HN!!
Just a thought.
People who think that userspace filesystems are realistic for anything but toys are just misguided.
I use sshfs all the time. Much of the software I use every day only needs to meet "toy" standards to be useful to me. What is Linus on about here?
His point isn't that userspace filesystems are useless, but that they are much slower, so the existence of a FUSE filesystem is not an argument against including a kernel-space filesystem in the kernel.
LUKS' throughput is much higher, simply because it does not add at least two more kernel<->userspace transitions on every read and write.
Glad I don't work with Torvalds.
"that's like saying you should do a microkernel - it may sound nice on paper, but it's a damn stupid idea for people who care more about some idea than they care about reality."
So I look up this microkernel thing and find this old debate: http://www.dina.dk/~abraham/Linus_vs_Tanenbaum.html
It would be great to have a resource that explains the what and the why behind the Linux architecture. Also, arguments, errr I mean debates, between high-caliber smart people would be cool.
There is a book "Just for Fun: The Story of an Accidental Revolutionary" that I ordered, but I have a feeling it doesn't go into the technical depth I would like. I could be wrong, just ordered it, will find out.
Not to say it could never happen, but the tradeoff on performance is just nowhere near worth it right now.
Still, it seems like a lot of the interesting filesystem ideas have to do with non-root users authenticating to something else over some network connection. This type of thing is simply a more natural fit for user space.
Keeping the high-level stateful protocol stuff out of the kernel is usually a good idea except when performance is all that matters (i.e. there will be full-time developers tuning it and cleaning up the inevitable security and crash bugs).
And Linus acknowledges:
> fuse works fine if the thing being exported is some random low-use
> interface to a fundamentally slow device
In the end, the answer is always to pick the right tool for the job. Comparing Minix and Linux is about as meaningful as comparing a phillips screwdriver with a flathead. They're used for different purposes.
I agree with your last sentence though.
If reliability is that much of a concern, why isn't the system being designed in a way that makes the distinction irrelevant?
about as meaningful as comparing a phillips screwdriver with a flathead. They're used for different purposes.
Is this just that they work with different kinds of screws, or are there cases where it's actually preferable to use a flathead screw+screwdriver instead of Phillips (or Robertson, which for some reason we don't have around here)?
...as meaningful as comparing a phillips screwdriver with a flathead. They're used for different purposes.
is perhaps not the best analogy. They are both for turning screws, and some people do, in fact, have strong opinions on which is better.
Unless you meant to say that a phillips is for turning screws and a flathead is for prying stuff open. :-D
This means that in areas where performance is paramount, the system-stability benefits of a microkernel design just aren't worth the loss in speed. In other areas they might be, but as the current OS landscape shows, it's not in huge demand. Part of this is obviously that monolithic/hybrid kernels really aren't that prone to crashing (bugs get fixed), which means that in practice these kernel designs offer BOTH performance and stability.
There are at least a few microkernel implementations besides Minix that are in use, more or less stable, and show some viability (e.g. QNX, the L4 family, and to some extent the EROS family).
They have received nowhere as much attention as something like Linux. So the usability is not really up there.
The only useful thing you can run on L4 is Linux, but that's pointless.
EROS/CapROS/Coyotos never had any useful userspace.
Has the Minix kernel shown significant performance results? (I am not being sarcastic btw, I am honestly asking.)
I know OSX uses something like a micro-kernel and it is a perfectly fine desktop OS, but performance suffers once you get to thousands of threads.
The filesystem, device drivers, IPC, network stack, etc. are all in kernel space. It's true that it's built around the Mach microkernel, but as in most systems built around it, Mach is just a core layer in the kernel onion, rather than a microkernel in its own right with the other major components implemented as user-space servers.
"If the automobile had followed the same development as the computer, a Rolls-Royce would today cost $100, get a million miles per gallon, and explode once a year killing everyone inside."
OSX doesn't use anything like a microkernel. It runs Mach (which even microkernel fans ridicule; it is bigger than many monolithic kernels, like the Plan 9 kernel) with a monolithic BSD-style kernel directly on top. When your "microkernel" has (being generous) two components, both huge, I don't think it is what most people consider a microkernel at all.
It is also fun to remember that people also claimed Windows NT was a microkernel, and then they added the graphics system right into the kernel.
Not a toy.
- New languages are needed for writing distributed/parallel applications
`Needed', no. `Helpful', perhaps. The jury's still out.