To those who just spent the last two years retraining your teams and retooling your infrastructure explicitly for docker (who may show up in this thread embracing and enhancing with a large marketing budget shortly), do take this opportunity to learn the architectural and management/maintenance value of abstraction. ;)
That said, VMs have their place, and Docker has the option to switch out backends. It's entirely possible to replace runc with some other tool that starts VMs instead of containers. (That's already happening today with Windows containers.)
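As a rough sketch of what that backend swap looks like, here's how an alternative OCI runtime gets registered with the Docker daemon (assuming, for illustration, a VM-backed runtime like Kata Containers installed at `/usr/bin/kata-runtime`; the runtime name and path are whatever your install uses):

```json
{
  "runtimes": {
    "kata": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}
```

With that in `/etc/docker/daemon.json` and the daemon restarted, `docker run --runtime=kata ...` launches the container inside a lightweight VM instead of a plain namespace/cgroup sandbox, while the rest of the Docker workflow stays the same.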
Could you use file system snapshots for this? Maybe also for the layers?
That being said, if you're going to go that way, it's too bad that there isn't more inspiration from the past 20 years of OS design. A capability-based security/object management interface would be nice. I also really like Akaros's VM threads model; IMO that'll be the way we end up running what we currently call unikernels.
Agreed, and seL4 comes to mind. It's capability-based, quite fast, and secure. For that matter, it's also quite small.
Well no, it's not "fundamentally" incompatible, it just doesn't seem very practical. And practicality matters IMHO.
Maybe I'm missing something but it seems the debug mode of these systems would be very similar to how you debug a regular kernel. I.e. you have some GDB stub provided by your hypervisor. That might be fine for GDB, but what about all the tools to observe a running system? How would I ftrace, perf, strace, netstat, tcpdump, poke around in /proc and /sys etc?
Sure, there's nothing stopping you from developing equivalent tools, but it's not very practical. Building a truly isolated container environment for Linux (something like Zones) would also give you isolation and provisioning speed, and you'd get to keep all your tooling.
I don't think it's fundamentally impractical either, since a unikernel is simply the parts of the OS that you need compiled into the application binary. Maybe this means it is actually running an SSH server, but I think it will look more like a debug service running in the application (if only because unikernels likely wouldn't have processes or files or other concepts that are designed for a multi-tenant world).
> But it's not very practical.
I think this is the question: do the gains justify the costs? Considering the gains include a reduced attack surface, simpler orchestration, reduced resource allocation, etc., I think they do in the cloud market, which is only growing.
> Building a truly isolated container environment for Linux (something like Zones) would also give you isolation and provisioning speed, but you get to keep all your tooling.
You don't get to keep all of your tooling with containers anyway--unless you're rolling your own orchestration and container runtime, you're almost certainly using the tools provided by or building off of your container runtime. This is especially true if your service runs on dozens or hundreds of hosts--you're probably not just SSHing onto prod servers to do your debugging with your ordinary tools; you're relying on something that abstracts over those hosts (of course, there are exceptions, but they're just that: exceptions).
Looks like they've got a solution for debugging: https://twitter.com/erlang_on_xen/status/641628659657371648
But if we're comparing unikernel-VMs to containers, my guess is that most users are after the "take my $LANG binary and concatenate it with half a Linux kernel"-workflow. And for that general case, most standard tools are off the table.
Obviously Xen will have a long life due to AWS, but I'm failing to see anything here that systemd-machined and KVM don't offer on the performance side.
There are probably some use cases, but the Xen time-slice model doesn't seem to know enough about frequency binning, or about the impact of high-energy/high-heat instructions on it, to be very competitive in the applications I can think of.
Docker's main problem is an increasing number of use cases, with engineering decisions that don't meet everyone's needs. To be honest, if it weren't for faux container support needs on Windows/OS X, I'd bet systemd-nspawn would take over that market because of this.
For the most part, Docker performance is limited by concurrency and Amdahl's law (including filesystem performance). I don't see Xen solving these issues, but there are very real security benefits.
One very real issue I can think of: any person or process that can launch a container on Docker is effectively given passwordless sudo access, particularly because you can't disable the privileged flag. The attack surface on Xen would be much lower in that case, though once the exploits start arriving, Docker could change that choice and reduce its attack surface quite a bit.
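A minimal illustration of why Docker access amounts to root on the host (assuming a default install where the user can talk to the Docker daemon; the image name is just an example):

```shell
# Any user who can launch containers can bind-mount the host's root
# filesystem into a container and chroot into it as root.
# --privileged isn't even needed for this particular trick:
docker run --rm -it -v /:/host alpine chroot /host /bin/sh

# The resulting shell runs as root inside the host's own filesystem,
# so the caller can read shadow files, add users, edit sudoers, etc.
```

This is why membership in the `docker` group is generally documented as being equivalent to root access.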
I think that these reasons, plus the EOL nature of Solaris "upstream", would easily put off most people in charge of making a long term technology commitment. I know it did for me.
It doesn't help that the name was already in use (by a minimal X11 server).
One problem with unikernels is a lack of debugging/tracing tools (like DTrace/eBPF): https://www.joyent.com/blog/unikernels-are-unfit-for-product...
See discussions here:
It's also simply not relevant for many cases.
Unikernels appear to be part of the solution...