This is an operating system, written in OCaml, that runs directly on the Xen hypervisor and does away with the usual OS abstractions.
What the article doesn't really emphasize is that Cloudius's aim is entirely about running the JVM. That, of course, will guarantee popularity amongst "enterprise" types.
However, it's still very similar to running a JVM in a process directly on the host. You could do something similar by running a JVM on the host and using cgroups to confine it.
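(To make the cgroups point concrete, here's a minimal sketch of doing that confinement by hand, assuming a cgroup v2 unified hierarchy mounted at /sys/fs/cgroup; the group name "jvmbox" and "app.jar" are made up, and under cgroup v1 you'd write memory.limit_in_bytes instead:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Write a single value to a cgroup control file. */
    static void cg_write(const char *path, const char *val) {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); exit(1); }
        fputs(val, f);
        fclose(f);
    }

    int main(void) {
        /* Needs root, or a delegated subtree. */
        mkdir("/sys/fs/cgroup/jvmbox", 0755);
        cg_write("/sys/fs/cgroup/jvmbox/memory.max", "1G");  /* cap memory */

        char pid[32];
        snprintf(pid, sizeof(pid), "%d", (int)getpid());
        cg_write("/sys/fs/cgroup/jvmbox/cgroup.procs", pid); /* join the group */

        execlp("java", "java", "-jar", "app.jar", (char *)NULL);
        perror("execlp");
        return 1;
    }

The JVM inherits the cgroup membership across exec, so the memory cap applies to the whole application.)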
Cloudius's USP is that existing clouds already run full guest operating systems, so OSv running a JVM fits into this landscape naturally. Architecturally it's nothing new.
I wish Dor & Avi well though :-) (ex Red Hat associates)
In their case, the application runs both as a process in the host and as a guest. It gives the application access to traditional OS APIs, and allows use of processor extensions to directly access virtualized hardware. The benefits include the ability for an application to do low-level custom IPC, to use the page tables for garbage collection, to trace system calls much more efficiently, to hook into page faults, and so on. Very cool stuff.
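To make the page-fault trick concrete, here's a rough userspace analogue (Linux assumed; my own sketch, not OSv code). Some collectors build write barriers by read-protecting a page and catching the write in a SIGSEGV handler; under OSv you'd hook the fault directly instead of taking the signal-delivery detour:

    /* Note: mprotect() isn't formally async-signal-safe; this shows the
     * shape of the trick, not production code. */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static char *page;
    static size_t page_size;

    static void on_fault(int sig, siginfo_t *info, void *ctx) {
        (void)sig; (void)info; (void)ctx;
        /* A GC would record info->si_addr in its dirty set here. */
        mprotect(page, page_size, PROT_READ | PROT_WRITE); /* allow the retry */
    }

    int main(void) {
        page_size = (size_t)sysconf(_SC_PAGESIZE);
        page = mmap(NULL, page_size, PROT_READ,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = on_fault;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        page[0] = 'x';  /* faults once; the handler unprotects; the write retries */
        printf("wrote through a tracked page: %c\n", page[0]);
        return 0;
    }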
VMs are mostly about solving the same problems that operating systems were there to solve in the first place, only slower.
I personally think that part of the problem is the way software is installed and configured by modern package managers and distributions. It makes people see installed software almost as part of the operating system. If you want two webservers with different configurations, you therefore need two operating systems.
The concept of creating a couple of different users and running the software out of each one seems foreign to modern system administration these days.
Supporting high-availability of guest servers by allowing "live" migration from one host to another is pretty cool as well - either for fail-over or to allow hardware maintenance.
The default position these days in enterprise environments seems to be that a server will be a VM unless you have a really strong case for having dedicated hardware (and that only ever seems to apply to database clusters).
I suspect running properly isolated JVMs directly on the host kernel would be faster than going through the hypervisor. I would love to see benchmarking numbers.
But there are two reasons why people keep reinventing the wheel:
- Current solution Isn't Sexy Enough, aka Not Invented Here.
- Current corporate culture drives acquisitions of tech startups based on a) business metrics, i.e. number of users, and b) innovative technology that doesn't exist anywhere else, which reinforces the previous point.
All this work to come up with yet another solution to the same old problems (IBM solved machine partitioning around the '70s?) is happening because engineers love to work on their specific pet projects rather than solve other people's problems.
Not that there's anything wrong with that.
However, in the real world there are self-service clouds which offer cheap, easy-to-consume VM containers. So having a JVM which runs directly in these containers makes some sense. (The alternative would be to persuade Amazon to let you run your JVM as a host process using LXC or something... good luck with that.)
They are pretty good :)
Hardware virtualization is best viewed as a hack for running multiple operating system kernels at the same time, where each kernel is designed to have a machine to itself. In any sanely designed system, this shouldn't be necessary; multiple processes under a single shared kernel should be good enough.
In the beginning you had
[OS] -> [App]
Then, people would put those Apps into a VM, with the trend going toward one VM per app.
[OS] -> [VM] -> [App]
Just to realize that the VM may be too much overhead, so now OSv comes along to cut that down, relying on the OS for memory management, task scheduling, etc., effectively ending up with
[OS] -> [translation layer] -> [App]
So that's just a glorified sandbox; why not just use LXC?
Cheaper, less administrative overhead, less abstraction, less vendor tie-in (if you go POSIX, for example).
I think that might upset the virtualization proponents though...
If processes are insufficiently isolated, it's the system call interface that's broken, not the isolation model.
It seems to me that virtual machines and containers could be implemented on top of the existing process hierarchy by allowing a parent process to intercept and reinterpret its child process' system calls. Simple example: Want to implement chroot? Intercept all open() calls and prepend the root path (taking care to prevent escaping with '../').
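For what it's worth, Linux already lets you get part of the way there with ptrace(2). Here's a minimal sketch of the interception idea (x86-64 only, error handling omitted, my own toy code): it observes a child's openat() paths at each syscall entry, and a real chroot-alike would rewrite the path in the child's memory instead of just printing it:

    #include <stdio.h>
    #include <string.h>
    #include <sys/ptrace.h>
    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Copy a NUL-terminated string out of the child's address space. */
    static void read_child_string(pid_t pid, unsigned long addr,
                                  char *buf, size_t len) {
        size_t i = 0;
        while (i + sizeof(long) <= len) {
            long word = ptrace(PTRACE_PEEKDATA, pid, addr + i, NULL);
            memcpy(buf + i, &word, sizeof(long));
            if (memchr(&word, '\0', sizeof(long)))
                return;
            i += sizeof(long);
        }
        buf[len - 1] = '\0';
    }

    int main(int argc, char *argv[]) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s cmd [args...]\n", argv[0]);
            return 1;
        }
        pid_t pid = fork();
        if (pid == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);  /* let the parent trace us */
            execvp(argv[1], &argv[1]);
            return 127;
        }
        int status, entering = 1;
        waitpid(pid, &status, 0);                    /* child stops at execvp */
        while (1) {
            ptrace(PTRACE_SYSCALL, pid, NULL, NULL); /* run to next syscall stop */
            waitpid(pid, &status, 0);
            if (WIFEXITED(status))
                break;
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, pid, NULL, &regs);
            /* Entry and exit stops alternate; look at the args on entry only. */
            if (entering && regs.orig_rax == SYS_openat) {
                char path[4096];
                read_child_string(pid, regs.rsi, path, sizeof(path));
                /* A chroot-alike would rewrite the path here (PTRACE_POKEDATA). */
                fprintf(stderr, "openat(\"%s\")\n", path);
            }
            entering = !entering;
        }
        return 0;
    }

Run it as ./tracer cat /etc/hostname and every path the child opens shows up on stderr. The catch is performance: two context switches per syscall is exactly the kind of overhead people buy VMs to avoid.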
I am just curious as to why people would use virtualization at all if it is possible to accomplish the same thing using regular processes.
"You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes."
Quotas are easy enough to enforce. Most UNIX derivatives (including Linux) have had disk and process quotas, some for over three decades.
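Per-process resource caps are similarly old hat. A minimal sketch using setrlimit(2); "some-workload" is a stand-in binary name:

    #include <stdio.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(void) {
        struct rlimit fsize = { .rlim_cur = 10 * 1024 * 1024,   /* 10 MiB soft */
                                .rlim_max = 10 * 1024 * 1024 }; /* 10 MiB hard */
        if (setrlimit(RLIMIT_FSIZE, &fsize) != 0)
            perror("setrlimit(RLIMIT_FSIZE)");
        /* From here on, any file this process or its children write past
         * the cap fails with SIGXFSZ / EFBIG. */
        execlp("some-workload", "some-workload", (char *)NULL);
        perror("execlp");
        return 1;
    }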
Virtualization seems to be best used for reselling (and overselling) hosts that are smaller than the physical machine, and not much else. Migration/failover is a non-issue if you know what you are doing, and if you need larger machines, it's just more overhead on top of a dedicated host. Plus it's increased administrative cost and more expense as a whole.
In theory, VMs should help reduce the attack surface by a lot. For example, all the system calls in the VM are handled by the guest OS. The actual system calls made to the host should be minimal and can be more easily audited.
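You can get a similarly small, auditable syscall surface without a VM, though. A Linux-only sketch using seccomp strict mode, which reduces a process to read(), write(), _exit() and sigreturn():

    #include <fcntl.h>
    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <unistd.h>

    int main(void) {
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);
        write(STDOUT_FILENO, "still allowed\n", 14);
        open("/dev/null", O_RDONLY);  /* not on the whitelist: SIGKILL here */
        write(STDOUT_FILENO, "never reached\n", 14);
        return 0;
    }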
[VM] -> [OS] -> [App]
Which is the approach used by Windows Hyper-V and the VMware vSphere Hypervisor.
I'm curious how you would configure and manage your applications. Like, are you able to attach to the input and output streams from the host, or would you still get some basic form of bash to manage it?
I've played with CoreOS a bit, but this is a much more radical change.
I love how people are beginning to rethink many of the things that all successful operating systems have had in common so far.
The idea of using virtualization as an inherent layer in the application architecture (ht IBM VM/370) is great for flexibility.
tl;dr amount of code rewritten vs. reused
Right now, only Xen, KVM and EC2 HVM are supported hypervisors. Hopefully OS X support via VMware might come later.
Instead of having a full OS-like isolated system within a single OS without virtualization, this uses virtualization but avoids having a full OS-like system inside the guests.
If you have OSv working both under VirtualBox or VMware on your OS X dev box and under KVM or EC2 HVM in your cloud production environment, then it might be possible to have docker.io features directly on your OS X dev box.
Also, if you're willing to virtualize something, and want Docker features, why wouldn't you virtualize Docker?
Currently, if you want to use docker under OS X you have to run it inside a Linux VM (typically using vagrant / virtualbox). But the Linux VM uses a bunch of memory in your dev environment. If docker could run the app inside virtualbox + OSv rather than virtualbox + full Linux + LXC, I assume you would get a more lightweight dev environment (faster boot times and less memory usage).
Wouldn't it be possible to run a minimal POSIX OS inside a container? (Genuine question; I don't know the answer.)