My thoughts exactly. I recently posted the link to that talk here on HN somewhere, and now seeing this is really creepy. Especially:
>All kernel components, device drivers and user applications execute in a single address space and unrestricted kernel mode (CPU ring 0). Protection and isolation are provided by software. Kernel uses builtin V8 engine to compile JavaScript into trusted native code. This guarantees only safe JavaScript code is actually executable.
>Every program runs in it's own sandboxed context and uses limited set of resources.
A lot of these ideas seem really good. I like that all processes run in CPU ring 0 and safety is ensured by the VM. It seems more flexible and allows for more easily provable safety guarantees. That said, I think JavaScript is the wrong way to go, especially how they're doing it. JavaScript carries with it the idea of garbage collection, which is fundamentally incompatible with hard real-time systems because it can create unpredictable pauses. What happens when a driver is communicating with a graphics card over a PCI bus, or with an SSD over SATA, and suddenly there's a GC pause? It might miss some bytes, and in the worst-case scenario the serial data stream will still be valid and the driver will carry on like nothing happened, returning corrupted data.
I'm somewhat confused about the concurrency model in Runtime. The front page says it ditches traditional preemptive multitasking in favor of an event loop, but another page says it runs a V8 isolate on each core. I'll assume this means that on my machine with 8 virtual cores there would be 8 parallel event loops. An event loop for multiplexing OS processes is simply not appropriate for the general use case: if any process performs a CPU-bound operation, it blocks all other processes in the same V8 isolate. This design is also poor for multicore work, since it's impossible to properly balance the load on each core, leading to poor performance. Traditional kernels can run a single thread on core A one millisecond and core B the next, but even if it's possible to migrate a process between isolates, it would be far less efficient than simply remapping the address space.
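The blocking objection is easy to demonstrate. This is a hedged sketch in plain Node-style JavaScript (not Runtime.JS code): a CPU-bound busy loop holds the event loop, so a timer requested for 10 ms cannot fire until the loop is released ~100 ms later.

```javascript
// One CPU-bound task starves every other callback sharing its event loop.
function cpuBound(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // Busy-wait: this never yields, so no other callback can run.
  }
}

const scheduled = Date.now();
let timerDelay = null;

setTimeout(() => {
  // Requested after 10 ms, but it cannot fire until cpuBound() returns.
  timerDelay = Date.now() - scheduled;
  console.log(`timer fired after ${timerDelay} ms`);
}, 10);

cpuBound(100); // holds the loop for ~100 ms
```

In a single-isolate system, every other "process" sharing that loop sees the same stall.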
I think it would be much better to use LLVM bitcode instead of JavaScript. Instead of JIT compilation, the kernel would compile the program to machine code when loading it into memory, statically verifying that it uses no unauthorized constructs. LLVM bitcode is already used like this by PNaCl. It offers the same security guarantees as JavaScript, but without the associated overhead.
It doesn't use threads, but programs are still preemptable. The single event loop uses multiple event queues: every program owns its own event queue and runs in a separate V8 context.
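For illustration only (the names and structure here are my own, not Runtime.JS internals), a single loop over per-program queues might drain at most one event per program per round, so one program's backlog doesn't starve the others between events:

```javascript
// Sketch of a single event loop multiplexing per-program event queues.
class Scheduler {
  constructor() {
    this.queues = new Map(); // program id -> array of pending event handlers
  }
  register(id) {
    this.queues.set(id, []);
  }
  post(id, handler) {
    this.queues.get(id).push(handler);
  }
  // One pass of the loop: run at most one event per program.
  tick() {
    const ran = [];
    for (const [id, queue] of this.queues) {
      if (queue.length > 0) {
        const handler = queue.shift();
        handler(); // would execute inside program `id`'s V8 context
        ran.push(id);
      }
    }
    return ran;
  }
}
```

The round-robin here is the simplest possible policy; a real kernel would presumably add priorities, but the queue-per-program shape is the point.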
V8 supports script interruption for contexts. The engine puts interrupt guard checks into every function and every loop, so it's possible to interrupt a context even in the middle of an infinite loop.
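Roughly what such an engine-inserted guard does, written out by hand (V8 emits these checks itself in compiled code; the flag and function names below are illustrative only):

```javascript
// A shared flag the kernel side would set to request preemption.
let interruptRequested = false;

// Simulates a compiled loop with a guard check at the top of each iteration.
function runGuarded(body) {
  while (true) {
    if (interruptRequested) {
      // The engine would unwind here and hand control back to the scheduler.
      return "interrupted";
    }
    body(); // one iteration of the program's (possibly infinite) loop
  }
}
```

Because the check sits inside the loop itself, even `while (true) {}` remains preemptable without any hardware timer-interrupt handling in the program's own code.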
Yes, the system doesn't solve the multicore balancing problem for applications automatically. The idea is that every app can use the available cores on a machine like a fixed-size thread pool, so it can manually schedule tasks onto them.
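A minimal sketch of that "app schedules onto a fixed pool" idea, assuming made-up task cost weights and plain arrays standing in for per-core isolates (on Node the equivalent would be `worker_threads`):

```javascript
// Greedy least-loaded placement of tasks onto a fixed set of per-core queues.
// taskCosts: estimated cost per task; coreCount: size of the fixed pool.
function placeTasks(taskCosts, coreCount) {
  const queues = Array.from({ length: coreCount }, () => ({ tasks: [], load: 0 }));
  for (const [i, cost] of taskCosts.entries()) {
    // Pick the core with the smallest accumulated load so far.
    const target = queues.reduce((a, b) => (b.load < a.load ? b : a));
    target.tasks.push(i);
    target.load += cost;
  }
  return queues.map((q) => q.tasks);
}
```

This is exactly the work a traditional kernel scheduler would do for you; here the application (or a library) has to carry it.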
What's interesting is that this thread pool could provide transparent access to computing power on other physical machines, so this should be pretty scalable.
GC pauses: yes, this is a problem, but it shouldn't corrupt any data. I think hardware interfaces should provide some kind of error-checking mechanism with the option to retry transmission. Worst case, critical parts can be implemented in native code.
Another idea for dealing with GC pauses is to reserve one core specifically for V8's concurrent GC tasks on behalf of every other core. Additionally, this core could take care of all interrupt handlers.
> I like that all processes run in CPU ring 0 and safety is ensured by the VM. It seems like this is more flexible
This approach has a few issues which make it impractical for use by a real general purpose OS:
* it moves all the safety-related work either onto the dynamic compiler (not so bad) when it can statically prove that the code is safe, or onto the critical path when it cannot (which generally causes significant slowdowns). For example, a traditional OS uses virtual memory to isolate application memory, where the performance impact comes from the occasional TLB miss, while this sort of approach may require bounds checks to be compiled in before many memory instructions
* the problems mentioned above could be mostly avoided by requiring use of a language or bytecode format designed to allow the compiler to use static checks, but then all your applications have to use it (which is what Runtime.JS does)
* it potentially allows more complex interactions between the OS/VM and the application compared to traditional system calls, which is the sort of thing that often creates hard-to-predict side effects
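To make the first bullet concrete, here is a hedged sketch of what "a check before many memory instructions" means, with a typed array standing in for a program's sandboxed linear memory (names are illustrative, not from any real kernel):

```javascript
// A program's sandboxed "address space": a plain linear byte buffer.
const memory = new Uint8Array(1024);

// Every store the compiler cannot statically prove in-bounds pays for
// a check like this, instead of relying on MMU page protection.
function checkedStore(addr, value) {
  if (addr < 0 || addr >= memory.length) {
    throw new RangeError(`out-of-bounds store at ${addr}`);
  }
  memory[addr] = value;
}
```

The MMU performs the equivalent check in hardware for free on the common path; software isolation pays for it (or for proving it unnecessary) on every access.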
> allows for more easily provable safety guarantees.
I don't know about that: hardware is normally formally verified, while software normally isn't. Hardware bugs still exist, but there are far fewer of them than software bugs.
One issue with using LLVM for this is compilation speed. JS engines are really tuned for compilation speed, whereas LLVM isn't (though NaCl's WIP SubZero backend may help here).
LLVM bitcode provides no security guarantees whatsoever though. It has raw unchecked pointers. You can even cast arbitrary integer values to pointers and dereference them. In PNaCl, security is provided by the NaCl layer beneath the LLVM layer.
It's an interesting idea, to have an OS kernel with processes/etc sandboxed from each other with NaCl sandboxes instead of traditional protection mechanisms. I wonder how the overhead compares.
They seem to be rolling their own HAL? Quite terrible if you think about it. Someone should really start a project that strips down Linux to the point that you could use its HAL for projects like these. I've heard things like USB or Ethernet drivers are incredibly hard to implement, let alone graphics drivers.
If we want a future in which it is feasible to design an operating system from scratch, we really need some nice way of decoupling HAL implementations from the OS. At the moment we can't even separate graphics drivers from window systems, so perhaps it's a naive desire.
Well you can use the NetBSD drivers pretty much anywhere you like, see http://rumpkernel.org - slightly more generally useful than the Linux ones as they are more portable and BSD licensed.
An exokernel design would allow for that sort of thing pretty nicely- it strips the kernel down to essentially just the drivers. Everything else could be added either to the kernel in a fork or to user space.
Linux graphics drivers actually already use an interface pretty similar to an exokernel, and new APIs like Mantle and Metal are even closer.
> Someone should really start a project that strips down Linux to the point that you could use its HAL for projects like these.
Firefox OS is based on "Gonk", which is basically the kernel and HAL from Android with some extra bits stuck on. I suspect that could be used again, surely?
You can always work with pintos (http://web.stanford.edu/class/cs140/projects/pintos/pintos.h...) or some other relatively simple 32-bit operating system if you want to get a feel for how to do this stuff from scratch. There's not a lot of point in it with Linux and all. Mostly it's a lot of tricky multi-threaded C programming.
You can forget about Linux in this regard. It is a giant bloated blob where "stripping down" is very hard. On the other end there are some promising projects like http://osv.io/ or Minix. Actually, Microsoft did a good job of separating graphics drivers from everything else, to the extent that you can just restart the driver when it crashes.
This reminds me of the Tanenbaum–Torvalds debate [0]. It's interesting to see how the microkernel approach is more useful today for derivative projects, while a monolithic blob like Linux is good only for distribution as a whole.
Drivers written in JavaScript? Sounds like an insane hack. For performance-uncritical stuff it might be worth using a language with GC, but if it is performance-uncritical you should be doing it in userspace anyway.
Except that JSLinux is running an operating system mostly written in C... It's just the virtual machine that is written in JavaScript. Still pretty incredible, though.
This project isn't exactly a kernel written in JavaScript either though - it's a bit of Assembly and mostly C++ underneath the JavaScript. It also is awesome regardless but I think they could make the description a little clearer.
JSLinux is actually not compiled with emscripten - it's written directly in JavaScript. There are some more notes on how it works here: http://bellard.org/jslinux/tech.html
A few weeks ago, when a rewrite of coreutils in Rust was released, I decided to check whether Atwood's Law held true for coreutils as well. At the time I didn't encounter a JavaScript version of coreutils, but with a project like this showing up, I guess it may only be a matter of time.
Actually, I think there's a pretty good motivation behind this.
When executing JS code in a browser, it runs inside a safe VM and can't touch any memory or variables that don't belong to it. However, the browser itself has to go through a lot of the abstractions and safeguards required by the kernel in order to run. In particular, every syscall induces a context switch, which costs a lot of CPU cycles. This happens on every operation that needs access to hardware, such as reading from a file or from a socket for IPC. The kernel also needs to update the MMU registers to map its own address space, invalidate the whole TLB, and so on.
The basic idea here, it seems, is to avoid all these costs by building the system so the user can run nothing that isn't inside the VM. The VM provides fail-safes and terminates offending code. This allows everything to run in a single address space in ring 0, thus avoiding context switches.
This is beneficial to (and only to) code that we used to run inside a browser or some kind of JS virtual machine (Node.js servers come to mind). Web apps such as editors (Atom, maybe?) can now run natively and much faster, which is very important on low-cost, low-end machines, without all the overhead they now pay twice (first to virtualize JS from everything else, and then to virtualize the browser processes). However, the raw performance that can be gained from writing fast C/C++ code is essentially lost. The more JS code you were running before, the more a bare-metal JavaScript OS will benefit you.
It's not clear to me if this will ever be necessary. What I just said is the justification for the idea, just off the top of my head. Anyone feel free to correct me if I screwed up the argument :)
OK, so this is to generate, essentially, VM images to run in a hypervisor? And it's useful if you've got something like a Node.js app that you want to run?
I can see that. I don't see the argument for "security, reliability and performance." First for oxford-comma reasons, next in terms of comparison with a kernel written in other languages, and finally in terms of JS.
But if you have to use JS, and you want to run something like a serving appliance VM, I can see how this could make things faster. Although calling in and out of native code should be quite fast on a JS engine, and using existing drivers and network stacks written in native code should be substantially faster, more reliable, and more stable than rewriting them in JS.
Because, sadly, everyone is hopping on the JavaScript bandwagon. But it's not as bad as it seems: there is CoffeeScript, which is IMHO a considerably better language, and there is ClojureScript, which brings the wonders of Lisps to JS. And for simple web-app logic I can tolerate JavaScript. It's also an uncomfortable but true fact that JavaScript is today one of the most-used languages among the forward-driving forces of tech (that would be the startups). Most GitHub repos are in JS, too.
However, the lack of integers will always be my pet peeve, and I really can't imagine how someone could arrive at such a horrible idea.
you can put wings on a motorbike, attach a boat hull on a train, grow a tree in a boat, build a house upside down, have a toilet made of gold, create a ski track in a hot country...
It's useless and expensive, doesn't make sense, and doesn't improve the technology, but you can show it to the world, and it will be talked about.
The benefit might be teaching people about compiler technologies: front ends, back ends, optimizers, parsers, etc., so that they might create some new, useful, well-designed language.
My beef with hackers is that they sometimes totally lack insight and design goals. It's cool to have access to technology and to tinker, but if you don't explore any new way of doing things, it's a little sad.
> My beef with hackers is that they sometimes totally lack insight and design goals.
> ... but if you don't explore any new way of doing things, it's a little sad.
Self-involved pretentiousness aside, what are your standards for an insightful, well-designed, new hack?
And if a 'hack' doesn't meet these standards, is it worthless to pursue? Should we give up on any curious adventures for fear they might not be insightful enough?
>Runtime.JS uses global non-blocking event loop to dispatch tasks for the whole system. Preemption is supported by design (and by V8), but haven't been implemented yet.
How would you implement preemption in an event loop based system? Can you just stop execution of functions in the middle, run a different function for some time and then continue with the original one?
Yes, V8 supports this for contexts. Each program runs in its own context. The engine puts interrupt guard checks into every function and every loop, so it's possible to interrupt a context even in the middle of an infinite loop.
You would need a JS runtime compiled for your target device, or a standalone JS compiler for your target device.
EDIT: Assumed you meant running JS on-the-metal. If you're running an OS then you could just have a JS interpreter (or V8) running on it.
NodeOS is more like a Node/Linux, which after a while we would just call Linux because it's shorter and catchier ;)
Runtime.JS seems to be a full operating system, not just a suite of tools on top of Linux. If it gets to any usable state, it would probably be possible to make a Node implementation that works on Runtime.JS. The Node implementation would be pure JavaScript, wrapping the low-level APIs of Runtime.JS.
It would have to. Node's libraries wouldn't be nearly enough to cover all the needs of applications here, especially since it's a microkernel and things like drivers that perform raw bus I/O have to be built on the libraries provided.
[1] https://www.destroyallsoftware.com/talks/the-birth-and-death...