>Every program runs in its own sandboxed context and uses a limited set of resources.
It's freakin awesome!
I'm somewhat confused about the concurrency model in Runtime. The front page says it ditches traditional preemptive multitasking in favor of an event loop, but another page says it runs a V8 isolate on each core. I'm going to assume this means that on my machine with 8 virtual cores there would be 8 parallel event loops. An event loop for multiplexing OS processes on the timeline is simply not appropriate for the general use case: if any process performs a CPU-bound operation, it blocks all other processes in the same V8 isolate. The system is also poorly suited to multicore work, since it's impossible to properly balance the load across cores, which leads to poor performance. A traditional kernel can run a single thread on core A one millisecond and core B the next; even if it's possible to migrate a process between isolates, that would be far less efficient than simply remapping the address space.
V8 supports script interruption for contexts: the engine inserts interrupt guard checks into every function and every loop, so it's possible to interrupt a context even in the middle of an infinite loop.
Yes, the system doesn't solve the multicore balancing problem for applications automatically. The idea is that every app can use the available cores on a machine like a fixed-size thread pool, manually scheduling tasks onto them.
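To make the "fixed-size thread pool" idea concrete, here's a minimal sketch. The `makePool` function and its round-robin policy are illustrative assumptions, not a Runtime.JS API; in a real system each slot would presumably be a V8 isolate pinned to a hardware core:

```javascript
// Sketch: the app, not the kernel, places tasks onto a fixed pool of "cores".
function makePool(coreCount) {
  const cores = Array.from({ length: coreCount }, () => []);
  let next = 0;
  return {
    schedule(task) {
      // Naive round-robin placement; smarter apps could balance by cost.
      cores[next].push(task);
      next = (next + 1) % coreCount;
    },
    // Drain every queue, as if each core ran its own event loop.
    run() {
      return cores.map((queue) => queue.map((task) => task()));
    },
  };
}

const pool = makePool(4);
for (let i = 0; i < 8; i++) {
  pool.schedule(() => i * i);
}
const results = pool.run(); // 4 per-core queues of 2 tasks each
```

The point of the sketch is the division of labor: placement is explicit application code, so a CPU-heavy task only stalls the queue it was scheduled onto.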
What's interesting is that this thread pool could provide transparent access to computing power on other physical machines, so it should be pretty scalable.
GC pauses: yes, this is a problem, but it shouldn't corrupt any data. I think hardware interfaces should provide some kind of error-checking mechanism with the option to retry transmission. Worst case, the critical parts can be implemented in native code.
Another idea for dealing with GC pauses is to reserve one core for V8's concurrent GC tasks for every other core. That dedicated core could additionally take care of all interrupt handlers.
This approach has a few issues which make it impractical for use by a real general-purpose OS:
* it moves all the work related to safety either onto the dynamic compiler (not so bad) when it can statically prove that the code is safe, or onto the critical path when it cannot (which generally causes significant slow-downs). For example, a traditional OS uses virtual memory to isolate application memory (where the performance impact comes from the occasional TLB miss), while this sort of approach may require bounds checks to be compiled in before many memory instructions
* the problems mentioned above can mostly be avoided by requiring a language or bytecode format designed to let the compiler rely on static checks, but then all your applications have to use it (which is what Runtime.JS does)
* it potentially allows more complex interactions between the OS/VM and the application than traditional system calls do, which is the sort of thing that often creates hard-to-predict side effects
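A toy illustration of the bounds check the first bullet is talking about (everything here is made up for illustration; a real VM would emit this guard in compiled code, not user-level JS):

```javascript
// Toy model of software-isolated memory: every load goes through a guard
// that the compiler must emit unless it can statically prove the index safe.
// This stands in for the page-table check a traditional OS gets from
// hardware, paying only on the occasional TLB miss.
const memory = new Uint8Array(16); // this app's "address space"

function checkedLoad(addr) {
  if (addr < 0 || addr >= memory.length) {
    // Software equivalent of a page fault: trap instead of reading
    // memory that belongs to another application.
    throw new RangeError(`fault: access to ${addr} outside sandbox`);
  }
  return memory[addr];
}

memory[3] = 42;
const ok = checkedLoad(3); // 42
```

When the guard can't be proven away, it runs on every access, which is exactly the "critical path" cost the bullet describes.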
> allows for more easily provable safety guarantees.
I don't know about that: hardware is normally formally verified, while software normally isn't. Hardware bugs still exist, but there are far fewer of them than in software.
- Lisp based OS (Interlisp, Lisp Machines)
- Oberon and derivatives
So I think at least for home systems it could work.
LLVM bitcode provides no security guarantees whatsoever, though. It has raw, unchecked pointers. You can even cast arbitrary integer values to pointers and dereference them. In PNaCl, security is provided by the NaCl layer beneath the LLVM layer.
It's an interesting idea, to have an OS kernel with processes/etc sandboxed from each other with NaCl sandboxes instead of traditional protection mechanisms. I wonder how the overhead compares.
- House http://en.wikipedia.org/wiki/House_(operating_system)
- HOP (House parent) http://lambda-the-ultimate.org/node/299 (topic recurse on jnode and others)
Mesa/Cedar at Xerox Parc:
If we want a future in which it is feasible to design an operating system from scratch, we really need some nice way of decoupling HAL implementations from the OS. At the moment we can't even separate graphics drivers from window systems, so perhaps it's a naive desire.
Linux graphics drivers actually already use an interface pretty similar to an exokernel, and new APIs like Mantle and Metal are even closer.
Firefox OS is based on "Gonk", which is basically the kernel and HAL from Android with some extra bits stuck on. I suspect that could be used again, surely?
When executing JS code in a browser, it runs inside a safe VM and can't touch any of the memory or variables that don't belong to it. However, the browser itself has to go through a lot of the abstractions and safeguards required by the kernel in order to run. In particular, every syscall induces a context switch, which costs a lot of CPU cycles. This happens on every operation that needs access to hardware, such as reading from a file or from a socket for IPC. The kernel also needs to update the MMU registers so it can map its own address space, invalidate the whole TLB, and so on.
The basic idea here, it seems, is to avoid all these costs by building the system so the user can run nothing that isn't inside a VM. The VM provides fail-safes and terminates offending code. This allows everything to run in a single address space in ring 0, avoiding context switches.
It's not clear to me if this will ever be necessary. What I just said is the justification for the idea, just off the top of my head. Anyone feel free to correct me if I screwed up the argument :)
I can see that. I don't see the argument for "security, reliability and performance." First for oxford-comma reasons, next in terms of comparison with a kernel written in other languages, and finally in terms of JS.
But if you have to use JS, and you want to run something like a serving appliance VM, I can see how this could make things faster. Although calling in and out of native code should be quite fast on a JS engine, and using existing drivers and network stacks written in native code should be substantially faster, more reliable, and more stable than rewriting them in JS.
However, the lack of integers will always be my pet peeve, and I really can't imagine how someone could arrive at such a horrible idea.
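The peeve is concrete: JS numbers are IEEE-754 doubles, so integers are only exact up to 2^53, which is an awkward fit for things like device registers and 64-bit file offsets. A quick demonstration:

```javascript
// All JS numbers are 64-bit floats, giving exact integers only up to 2^53.
const max = Number.MAX_SAFE_INTEGER; // 2^53 - 1 = 9007199254740991

// Both sums round to 2^53, so two different integers compare equal:
const collision = (max + 1 === max + 2); // true

// And the classic fractional rounding problem:
const exact = (0.1 + 0.2 === 0.3); // false
```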
It's useless, expensive, doesn't make sense, doesn't improve technologies, but you can show it to the world, and it will be talked about.
The benefit might be to teach people about using compiler technologies: front ends, back ends, optimizers, parsers, etc., so that they might create some new, useful, well-designed language.
My beef with hackers is that they sometimes totally lack insight and design goals. It's cool to have access to technology and to tinker, but if you don't explore any new way of doing things, it's a little sad.
> ... but if you don't explore any new way of doing things, it's a little sad.
Self involved pretentiousness aside, what are your standards for an insightful, well designed, new hack?
And if a 'hack' doesn't meet these standards, is it worthless to pursue? Should we give up on any curious adventures for fear they might not be insightful enough?
How would you implement preemption in an event loop based system? Can you just stop execution of functions in the middle, run a different function for some time and then continue with the original one?
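One cooperative approximation (a sketch of the general technique, not necessarily how Runtime.JS does it): express tasks as generators that yield at safe points, and have the scheduler resume each one in turn. True preemption, stopping a function at an arbitrary instruction, would instead need engine support such as V8's interrupt guards mentioned above.

```javascript
// Cooperative "preemption": each yield is a safe point where the
// scheduler may switch to a different task before resuming this one.
function* task(name, steps) {
  for (let i = 0; i < steps; i++) {
    yield `${name}:${i}`; // analogous to an interrupt guard check
  }
}

function runRoundRobin(tasks) {
  const trace = [];
  while (tasks.length > 0) {
    const t = tasks.shift();
    const { value, done } = t.next(); // resume until the next yield
    if (!done) {
      trace.push(value);
      tasks.push(t); // requeue: other tasks run before this one continues
    }
  }
  return trace;
}

const trace = runRoundRobin([task('a', 2), task('b', 2)]);
// trace interleaves the two tasks: ['a:0', 'b:0', 'a:1', 'b:1']
```

So yes, you can stop "in the middle" and continue later, but only at points the code explicitly exposes; a loop with no yield still monopolizes the event loop.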
How do you define an interrupt handler in JS?
No different from any other language.
If I wanted to run $SOMEFILE.js on an msp430 today, how would I go about it?
I don't see any mention of node. Is the plan to implement a whole new suite of low level libraries with different APIs?