Yeah, Docker is about to get some enhancements for sure. Maybe some real security improvements, too. You can count on it.
I had heard of it and that it was good. Didn't know he was one of the authors. Even more props. :)
There are two touted benefits of unikernels: performance and security. Performance turns out to be a red herring, as the overhead of an OS versus a hypervisor is roughly equivalent (with the OS actually winning in some use cases).
Security is definitely an issue, but it's so abstract. My company is a compliance cloud provider (for a very specific industry's compliance) and we have gone with Docker since we get to use the OS as our hypervisor. That makes it much more extensible and, in our use case, more secure, because we're able to auto-encrypt all network traffic leaving the hosts with a tap/tun virtual device.
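To give a loose picture of the idea (this is not our actual implementation, and deliberately not real cryptography -- a real deployment would use AES-GCM or a proper VPN): frames read off the tap/tun device get transformed before they leave the host, roughly like this toy round-trippable keystream:

```python
import hashlib

def keystream(key, nonce):
    """Toy keystream built from repeated hashing -- for illustration only,
    NOT a secure cipher."""
    counter = 0
    while True:
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1

def transform(frame, key, nonce):
    """XOR a frame with the keystream; applying it twice round-trips,
    which is what lets the receiving host recover the plaintext."""
    return bytes(b ^ k for b, k in zip(frame, keystream(key, nonce)))

# In the real setup, frames read from the tun fd are transformed here
# before being written out to the physical interface.
```

The point is that the OS-as-hypervisor gives you a place to hang that transform for every container on the host, without touching the apps.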
Two things need to happen to make unikernels attractive. First, a new hypervisor needs to be built, one that is just as extensible as an OS around the isolation primitives, and that offers something extra too (like the ability to fine-tune resource management better than an OS can). Second, a user-friendly mechanism like Docker needs to emerge.
As for 'a user friendly mechanism like Docker' ... well, I hope today's news convinces you that it's in progress.
Have you looked at OSv and Capstan? It's just like Docker in that respect.
That presumes you get to choose. When you have a hypervisor either way (as on IaaS), the only "choice" is between a "bare" hypervisor, or a hypervisor plus an OS doing containerization.
The advantage here would be that Amazon could change their container product to allow only one layer of isolation (user code in a unikernel, inside Amazon-run Xen) instead of two (user code in Docker, inside Amazon-run Linux per customer, inside Amazon-run Xen).
If you look at security through the narrow lens of "being able to break out of your environment" then yes, hypervisors are more secure, but you have to look at more than just that. The OS allows you to make more than just the host secure: it allows you to make the network secure, ensure that all customers get encrypted disks, etc.
Full disclosure, I work for a company doing this right now (Catalyze Inc.)
Google Compute Engine includes Linux and Windows based virtual machines running on KVM, local and durable storage options, and a simple REST based API for configuration and control.
No Xen there
It sounds like the main benefit to the Unikernel security is the implicit audit of all the services you want to roll into the kernel down to the bare minimum. I imagine that could be done with a more conventional architecture, it's just that nobody ever does.
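You could approximate that audit on a conventional stack, too. As a rough application-layer analogy (not a unikernel tool, just an illustration of "enumerate exactly what you rolled in"), a Python process can list every component it has actually pulled in:

```python
import sys

def loaded_components():
    # Top-level modules currently loaded -- an analogy for the
    # "everything you rolled into the kernel" list that a unikernel
    # build makes explicit and forces you to justify.
    return sorted({name.split(".")[0] for name in sys.modules})

# Each entry is something you'd have to audit (or strip) if you were
# aiming for a bare-minimum image.
```

The difference is that a unikernel toolchain makes this list the whole system, rather than one process's slice of it.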
It's hard to see how you'd get a traditional OS stripped down anywhere close to e.g. the mirage-firewall unikernel (http://roscidus.com/blog/blog/2016/01/01/a-unikernel-firewal...)
NodeOS cut out everything but the absolutely essential parts of the Linux kernel. No C libraries, no C compiler, no POSIX utilities, no user space, etc.
It's amazing how much you can cut out and still have a decent platform to build servers.
If all apps ran on their own hypervised runtime instead of a "native" runtime, the world would be a better place.
The overhead of starting a VM is actually in the double-digit milliseconds, and it can be done while the initial I/O is performed.
The XDG-App project is trying to make it easier to sandbox Linux desktop and server applications by defining all the interfaces between the sandboxes and the OS. This work, while initially designed for containers, will eventually work on VMs too.
I'm not very hopeful given that their CTO is quite open about wanting to embrace-extend-extinguish competing technologies. This move embraces unikernels, and now they are perfectly positioned to go the rest of the way.
The discussion at https://news.ycombinator.com/item?id=10904452 may shed light about my complaint.
Things like Docker and systemd are exactly EEE land grab lock-in plays to redirect "open source" value back towards private entities and unilateral control structures (instead of cross-vendor standards bodies, because standards bodies can't make individuals into billionaires).
How do we as lowly developers with effectively $0 net worth fight companies with billion dollar war chests? Over time, better does win (see: the collapse and accelerating irrelevance of Microsoft), but in the short term it's likely we'll get stuck in another 10 years of platform "dark ages" until we see the light again.
I don't think that's true, and I'm not sure what made you say it. Maybe you're referring to Google here, but I don't think it's true of either company.
I hope this pushes unikernel adoption forward, and boosts OCaml's adoption a little as well.
There's a reason Docker is so heavily funded by the biggest cloud companies. They're the ones who stand to benefit from specialized Docker container optimized for their own platform. It's a great way to package open source services and leverage the effort of the developer community into centralized profit.
It seems blatantly obvious that Docker is looking to build the app store of devops. I wish them the best of luck, but they are going to face some heavy resistance from open source initiatives. There is nothing about Docker that makes it fundamentally superior to the systems it's based on, specifically the LXC project. When developers finally wake up to the fact that they are sleep walking into a massive walled garden, Docker will lose some of its clout.
I've been involved in the Docker community as a free software developer for a long time. I may have squabbles with some of the people who work at Docker Inc. but many of them are very passionate about software freedom, so I don't think it's fair to say the project is bad because the CTO (who hasn't committed code for almost a year) has views that are unfriendly to free software.
For the record I don't have views unfriendly to free software. But I have upset many people by disagreeing with their design opinions, and as a result not merging their pull requests. Sometimes those people imagine a conspiracy of evil VCs smoking cigars, instructing me to reject their patch for an obscure capitalistic motive that they can't quite express. When in reality the explanation is simpler: I disagree on their opinion, and I have commit rights.
If you point out an example of my supposed hostility to free software, I will be happy to refute it with objective facts. But then everyone is free to believe what they want to believe.
Unfortunately, I think the voices of reason are still getting drowned out in the Docker hype.
I don't know about that. There has been considerable Docker backlash. One example from just a few days ago:
And this comment in particular:
We get into tricky situations when you need C extensions, e.g. MySQL or PostgreSQL. Since extensions must be statically linked, you have to decide upfront what you want: either extension can be compiled in, but including both in the package by default.... So a production Python build is fairly custom right now.
Nothing insurmountable, just haven't gotten the workflow perfect.
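For context on why it's a build-time decision: at run time all you can do is probe which driver actually made it into the image. A small sketch (the driver names here are just the usual candidates, not a fixed list):

```python
import importlib.util

def available_driver(candidates=("MySQLdb", "psycopg2")):
    """Return the first database driver that was compiled/installed into
    this Python build, or None if none of them made it in."""
    for name in candidates:
        if importlib.util.find_spec(name) is not None:
            return name
    return None
```

The app can adapt to whichever driver is present, but it can't pull in the other one after the fact -- hence the custom builds.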
ps - shameless plug but we're hiring talented OCaml devs (http://stackhut.com/#/careers)
pps - Congrats to @amirmc and the Mirage OS group!
The cool part is, if the OS is trimmed down enough (e.g. < 10 MB), it's small enough to fit in version control.
Much like we automate build tools to concatenate/minify web assets, it'll be possible to create a build step that takes a webapp as input and spits out a fully functional VM ready to deploy as output.
It completely inverts the deployment process. Instead of building an environment and deploying an app to it, you focus on building the app and deploy it as a VM when it's ready.
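Sketching that inversion (the names and artifact format here are made up; a real pipeline would invoke something like MirageOS's build tooling instead): the build step takes the app tree as input and emits a single deployable artifact.

```python
import hashlib
import json
from pathlib import Path

def build_image(app_dir, out_path):
    """Toy 'app in, image out' build step: bundle every app file plus a
    manifest into one artifact. Stands in for a real app -> VM compile."""
    app_dir, out_path = Path(app_dir), Path(out_path)
    files = sorted(p for p in app_dir.rglob("*") if p.is_file())
    manifest = {
        str(p.relative_to(app_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in files
    }
    payload = json.dumps(manifest).encode() + b"".join(p.read_bytes() for p in files)
    out_path.write_bytes(payload)
    return out_path

# Deployment then becomes "boot this one file", not "prepare an
# environment and copy the app into it".
```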
As much as physically possible, yes.
 http://unikernel.com/#notice (not to be confused with the community website at http://unikernel.org :)
I've been following the Mirage and rumpkernel lists for a while and it's nice to see these hackers getting traction (and money!) for their efforts.
Not too long ago unikernel.org was started, which IIRC was billed as a community-driven "one stop shop" for information on the subject, and which I assume is independent of the company "Unikernel Systems". Hopefully Docker won't go rogue and start attacking others that use the term "unikernel" by claiming that it's trademarked or something like that.
Congratulations Amir et al!
Also, the word 'unikernel' is used like a generic term and as far as I know it's not trademarked.
(edit added later so that we get credit going where it's due: It occurred to me that it was Sebastian Wicki who was campaigning for the no-userspace mode last summer as part of his lowRISC project, and did the initial work to be able to again use Rumprun without "userspace". Apparently my memory of events goes only a few weeks back if I don't think about things carefully ...)
Now, if I'm allowed to summarize rump kernels, I'd say the goal is to build a framework which incorporates enough of the past to allow things to work, but tries to be as flexible as possible so as to enable the future. I'm a firm believer in "there's no such thing as #1", which means you shouldn't produce software components which work only in one type of tool, because it's easy to foresee that right around the corner you'll have to build components for the next tool.
p.s. "Atti"? That one was new, usually it's "Antii" or something like that ;) ;)
Is having everything in-kernel (a single memory space) with a POSIX-y API for applications the right direction?
Or did I just brain-fart here?
Anyway, that's the right direction if that's what you want to accomplish. Otherwise it's the wrong direction.
POSIX is just the part of UNIX that should have been part of the C runtime, but instead they made it into an optional standard.
Almost every other programming language with a richer runtime has no need to depend on POSIX.
Their APIs for creating processes and threads, accessing file systems, and communicating over the network aren't POSIX-dependent.
I didn't expect unikernels to gain mainstream notice for at least 6 months to a year.
I posted the links because (at least for me) it was hard to get my arms around what a unikernel was by looking at MirageOS. Largely because I have little knowledge of functional programming, OCaml, etc.
Other than reducing complexity, our distributed database uses the virtual memory hardware in a unique way, so a mono kernel was essential.
Having said that, the easiest way to develop such a system is not on the bare metal; it's by running Linux in such a way that it only uses the first 1 or 2 cores, and then running your "custom kernel" on any other cores in the system. Then you can use a normal debugger and utilities during development. It's only when you actually want to put it into production that you consider not using Linux at all.
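On Linux, the userspace half of that trick is just CPU affinity (the `isolcpus=` boot parameter keeps the kernel's scheduler off the remaining cores, which your custom kernel then claims). As a small illustration of the affinity side, which is in Python's standard library (Linux-only -- `sched_setaffinity` isn't available everywhere):

```python
import os

def pin_to_first_cores(n=2):
    """Restrict this process to the first n online CPUs, mirroring the
    'Linux keeps cores 0-1, the custom kernel gets the rest' split."""
    online = sorted(os.sched_getaffinity(0))  # CPUs we may run on now
    target = set(online[:n])
    os.sched_setaffinity(0, target)           # shrink to the first n
    return target
```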
Docker is trying to be the next systemd
Pick whichever reason suits your agenda ;-)
My 'editability' timed out on my initial comment before I had a chance to fix that, Antti.
That said, there's a footnote about rumprun, and if Justin Cormack of Unikernel Systems is https://twitter.com/justincormack, it's all so close to Antti (Justin and Antti are both of NetBSD). For -me- Antti has been such a big part of my unikernel experience, I figured there'd be more involvement in this project or at least more recognition of his work.
I hope this wasn't an acquisition intended simply to kill the unikernel approach.
If they decide to drop containers and go the unikernel route, then a lot of what they've already done to this point is no longer relevant. If they go the containers direction, then unikernel technology won't be too useful, unless they use it to create a host OS that runs the containers; but that is a huge challenge, and at that point it would no longer be a simple kernel.
A bad analogy could be made to Mac OS 9 and earlier multitasking, and Mac OS X multitasking: the implementations are wildly different but you keep (and gain) customers.
Were the terms of the deal disclosed?
Perhaps Unikernel Systems ran out of money and it was an "acqui-hire"?
Even if the acquisition was a result of Unikernel Systems running out of money, I have no doubt that these folks will continue to work on unikernels, rather than, say, more tools for managing containers.
Its people will be the elements that go to make up a new generation of unikernel companies.
> The result of this is a very small and fast machine that has fewer security issues than traditional operating systems (because you strip out so much from the operating system, the attack surface becomes very small, too).
Obviously traditional operating systems provide a lot of interfaces that represent attack surface, but they're generally able to be secured. On the other hand, much of the operating system actually _implements_ security, so if you throw it out, you're losing that.
A server could be killed and restarted at the first sign of compromise. If you aren't loading GBs' worth of modules that'll never be used, the OS will be able to boot up in a few seconds at most.
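The "first sign of compromise" half needs a cheap tamper check. A minimal sketch (hypothetical helpers, not any particular product): fingerprint the supposedly-immutable image at boot, re-check periodically, and treat any drift as the cue to recycle the instance.

```python
import hashlib
from pathlib import Path

def fingerprint(root):
    """Hash every file under root into one digest; any change to the
    supposedly-immutable image changes the result."""
    h = hashlib.sha256()
    for p in sorted(Path(root).rglob("*")):
        if p.is_file():
            h.update(str(p).encode())
            h.update(p.read_bytes())
    return h.hexdigest()

def compromised(root, baseline):
    """True if the tree no longer matches the boot-time baseline --
    the signal to kill and restart the instance."""
    return fingerprint(root) != baseline
```

With fast boots, "detect drift, kill, reboot clean" becomes a viable default response rather than a last resort.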
If you're talking about the on-disk files, that's possible to do without throwing away almost everything -- Solaris does this using read-only zones (ROZR).
If you're talking about the kernel itself, that can never be truly immutable. An OS has to read and write memory.
I also don't understand the claim about "GBs worth of modules"; if you're talking about kernel modules, there shouldn't be "GBs" to start with.
As for security theatre: I hardly think crypto support or auditing frameworks qualify. So you need to define that as well.
Meaning every restart brings the system back to a fresh state. Most trojans/backdoors persist because they're able to modify the underlying OS to load them on boot. And no, immutable OSes are by no means a new idea; this just uses the approach by default.
Not talking about the kernel specifically, although there's no reason boot time should be slowed down by loading an unnecessary HAL, drivers, protocols, or utilities that will never be used in a server setup. "GBs of modules" was referring to the rest of the OS, made up mostly of features that will never be used by the server.
Crypto support is one big issue. I'd assume if the OS uses V8 at the base level, it should be able to be packaged with OpenSSL.
Auditing for what exactly? File access? The filesystem is immutable. Network requests? Those can be handled at the network level, or a tool can be added to log them via an offline logging service.
Security auditing at the OS level isn't required to the same degree it is on other OSes because the attack surface is so small. For instance, you don't have to check for SSH vulnerabilities when there's no SSH. The userspace sandbox isn't necessary because everything that's network-facing already has to run through V8's sandbox.
Application-level vulnerabilities will always be an issue but that's nothing new. This doesn't solve all problems. Just reduces the potential for issues.
Ok, then yes, Solaris read-only zones provide that level of immutability.
A good operating system will be smart enough to load only what it needs; there will be some inefficiency, but protocols, drivers, and the HAL are actually used in server setups and are not usually the source of significantly increased boot times unless they're misconfigured. Also, with a proper service management facility and packaging, it should be possible to simply omit unnecessary items; services are usually what contribute the most to startup time.
Right, but you generally need more crypto administrative capability than OpenSSL alone provides to manage a sufficiently complex application. (e.g. gnupg, etc.)
> Security auditing at the OS level isn't required to the same degree it is on other OSes because the attack surface is so small.
Sorry, I'll have to disagree on that point. Auditing is always important because it allows you to determine what happened and why when privileged operations are involved. The overhead of auditing is almost completely determined by what an administrator chooses to audit and how detailed that auditing is.
> For instance, you don't have to check for SSH vulnerabilities when there's no SSH.
Perhaps you misunderstood what I referred to when I spoke of auditing; I don't speak of checking for vulnerabilities -- packaging should generally handle that in combination with metadata and reporting. I was referring to the recording of the assumption of privileged operations, system authentication, and related details.
> The userspace sandbox isn't necessary because everything that's network-facing already has to run through V8's sandbox.
Disagree again; the point of having a userspace sandbox is in case something escapes V8's sandbox, which has been done in the past. It's an extra line of defense.
While it reduces the number of problems in the existing space, I personally believe it creates new problems. I'm not sure the tradeoffs are worth it.
I would rather see an immutable container with a minimised set of packages with proper auditing enabled. That's far easier to test and deploy and doesn't create new problems.