I think there will be more projects like this. Apps running in a sandbox, thinking they have the OS to themselves. The next logical step is to have all new apps written in managed code, and use virtualization for the "legacy" native code.
Come to think of it, isn't it supposed to be the purpose of an operating system: letting programs think they own the hardware? Now they can pretend to own the OS too :)
I think we'll get to microkernels, but through evolutionary steps like this rather than ground-up redesign.
Presumably that would mean that operating systems (at least for guests) could become a lot simpler, since in most cases the guest is there for one specialized purpose and doesn't need most of the features usually found in a general-purpose OS.
Both present "virtual machines" to client code. For that matter, so does UNIX, although the UNIX virtual machine has some very complex virtual instructions, like fork(2), exec(2), dup(2), etc. You can crudely map most software platforms on a continuum of abstraction, with something like Python's implicit virtual machine (which is dynamically typed and bound late) at one extreme, and a bare-metal VMM whose interface is identical to that of the underlying hardware at the other.
Both paravirtual hypervisors and microkernels extend the underlying hardware, and they do so at a lower level of abstraction than what we call an OS. In practice, the hypervisor's extensions feel more like hardware (they might include device models, virtual memory translation, and interrupt models), while the microkernel's feel more like software (providing RPC mechanisms, security models, and abstractions like "thread" and "process").
Even in practice the line is grey sometimes. L4 used to call itself a microkernel; now it calls itself a hypervisor.
A hypervisor runs multiple operating systems, letting each one think it has the whole of the hardware to itself.
A microkernel is one way of structuring the kernel of an operating system: each part of it is a separate process, and the parts get things done by routing messages to each other in a safe manner, rather than by calling directly into each other's code.
With a hypervisor you wouldn't expect the guest OSes to communicate with each other at all, whereas with a microkernel you'd expect the different processes to talk to each other a lot.
You can, apparently, repurpose a microkernel as a hypervisor, but I don't know anything about that at all. Presumably the infrastructure is quite similar.
Similar work was done in the OpenTC project (http://www.opentc.net), i.e., running legacy OSs in isolated compartments and multiplexing their visual interfaces through a "secure GUI", which labels the windows/interfaces according to certain properties. It also supported OpenGL in the "AppVMs", which is currently omitted in Qubes OS.
Qubes implements a Security by Isolation approach. To do this, Qubes uses virtualization technology to isolate various programs from each other, and even to sandbox many system-level components, like the networking or storage subsystems, so that their compromise doesn't affect the integrity of the rest of the system.
It's not exactly a new "Linux based OS". Joanna is a well-known security hacker, and if you think about her reasons you'll see that what she's trying to build is a new security model for operating systems based on virtualization, just like Google Chrome was a new model for browser security.
Virtualization will be huge in the future; within the next 15 years or so we'll probably have common desktop hardware shipping with bare-metal hypervisors implemented directly on the hardware. Virtualization is already used in datacenters and network appliances, and extensively in the "cloud". How we'll use it on the desktop is still an open question, but I do believe the first applications will be running legacy applications on modern operating systems, and security.