Yep, unikernels can get away without the security boundary between user and kernel modes... except that if you want any security at all in your installation, you still need one, because code running in kernel mode can break out of the container.
They can also get away without process scheduling and IPC, but if you want any multiprocessing in the application, you'll need to provide it either in the kernel or in userland. Ditto for user management.
They can't really get away without device drivers. Ok, except for devices that you won't use, just like any other kernel.
What is this kernel feature that people are taking away? Or are they just talking about a stripped-down Linux without the device drivers (the kind that would fit on a floppy disk)?
The same applies to the services. What are those services that people gain so much from removing from containers, and why don't people remove them from VMs and real machines too?
From there, you can start stripping out other things. You don't need a huge init system because all you're doing is running a single application. Your libc can be made tiny, or removed completely, because you can strip it of all the calls your app doesn't need. You don't need to run an sshd, because there is no shell to get into. You can get rid of PAM because there are no accounts to access or authenticate against.
On real machines and VMs, these things are necessary so you can run tests, do cleanup, etc. Unikernels are centered entirely around the idea that those things are someone else's job, and if needed, can be shouted for across the network.
Because you've stripped out everything that could supply you with any other option.
Many common server frameworks build multiprocessing into the language runtime: Node.js mandates an event-driven callback style, Go has goroutines, etc. So it already exists in userland. Unikernels avoid duplicating that functionality at the kernel level, so the two systems don't fight each other.
The hypervisor proxies device drivers through from the host operating system (running in Dom0) to the guest operating systems.
The idea is to run e.g. Node, or Postgres, or memcached on bare metal. So instead of write() trapping into the operating system, it becomes a library function that talks directly to the underlying device driver. The only process the VM runs would be that one application, and you'd rely on the hypervisor to schedule multiple applications on one physical box.
(Sadly, I don't remember the title for a citation here.)
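A hypothetical sketch of that "write() becomes a library function" idea (the BlockDevice type is invented for illustration; it's not a real MirageOS or rump-kernel API):

```go
package main

import "fmt"

// Hypothetical sketch only: in a unikernel/library OS, a write is an
// ordinary function call into a driver linked into the same address
// space. No trap, no user/kernel mode switch, no copy across a
// protection boundary.
type BlockDevice struct {
	buf []byte
}

// Write mimics the write(2) shape, but it's a plain library call.
func (d *BlockDevice) Write(p []byte) (int, error) {
	d.buf = append(d.buf, p...)
	return len(p), nil
}

func main() {
	dev := &BlockDevice{}
	n, _ := dev.Write([]byte("hello"))
	fmt.Println(n, "bytes written, no syscall involved")
}
```

The real driver would of course talk to virtual hardware, but the calling convention is the point: it's a function call, not a trap.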
So unikernels are not about containers, and I've been mixing two different approaches. It makes more sense now.
Before, you had to rely on, say, Apache for URL rewriting and routing. Now that data and logic have been lifted into the languages themselves.
Xen provides very basic (but safe) message/event channels between VMs. Basically this becomes your IPC, and the VMs become user processes.
With a new enough CPU with the right memory virtualization features, Xen can scale to many thousands of domUs. And with each domU only taking up slightly more resources than a typical user process, this is a completely reasonable approach.
Again, your hypervisor becomes the OS which has to worry about the bare metal, and the VMs replace user processes.
I intend to start playing with MirageOS seriously myself next year.
Edit: some dependencies in an example MirageOS app: https://mirage.io/wiki/technical-background#ModularOSLibrari...
I agree with you 100%, but it does make me think. Every Linux installation I've done in the last five or so years has been either under ESXi, or on AWS.
I'm sure I'm not alone in that, and that's a trend that's going to continue to grow.
How much of the kernel is drivers that are absolutely never going to be useful in that scenario?
How much room is there for distributions to start supplying a kernel configuration with half of it never built?
A call to the system, you may say.
I managed to skip the whole Angular/Mongo hype, and now that I'm actually getting back into frontend development, it's all about React. Can't say I missed it.
I remember telling my boss at my very first job "Yeah, I want to learn MFC and COM". His response was "Why? By the time you're out of college, everything will be .NET." It turned out he had underestimated: by the time I graduated from college, MFC & COM had been replaced by .NET, which had been replaced by webapps running under J2EE, which were just about to be replaced by Rails & Django.
.NET and J2EE (or C# and Java) never really replaced each other--at best, .NET was MS's attempt to upgrade the MFC/COM stuff to the Java VM world after J++ fell apart. They're more or less the same model, one's just the Windows Server flavor while the other is the Linux/Unix flavor.
I'd say peak J2EE was 2002-2003 - it continued to be a factor up through 2007 or so, and probably still is now but the hype has long since passed it by. .NET had a peak hype cycle around 2000, when it was announced, and also continues to have many satisfied users today but ceased to gain major mindshare right around when Google started ascending in 2004-2005.
React is a nice piece of software, I'm glad to see it introduce FRP to a mainstream audience, and it fixes certain problems around componentization that you will run into if you try to build a big-enough app in vanilla JS. I've been tempted to use it for a couple startup ideas, but then I remember that no app is "big enough" until it has actual users, and getting them is the hard part. So I remain largely agnostic about its usefulness for startups, while believing that it can be quite useful for more established teams.
Size: Each unikernel image contains a kernel specially compiled for the application. A properly optimized container image contains just the application (the kernel is shared by all containers). It's possible to create Docker containers under 10 MB: see http://blog.xebia.com/2014/07/04/create-the-smallest-possibl... and http://blog.xebia.com/2015/06/30/how-to-create-the-smallest-....
Security: Reusing a well-known and widely deployed Linux kernel (without all the surrounding machinery provided by the "larger" OS) sounds at least as secure as compiling your own specialized unikernel.
Compatibility: The compatibility story sounds better for containers because your app will work by default without doing anything.
"Given that unikernels compile only which is necessary into the applications, the surface area is very small."
I think this is a somewhat strange statement. You don't lower the attack surface by not compiling in dead code--unless you're offering a way to run arbitrary code, in which case the dead code can be used to implement the anti-features the attacker wants. But to me it seems the primary innovation of unikernels is that they are not implemented in C, but in a higher-level language like OCaml, Erlang, etc. Those languages, if their compilers and runtimes are implemented correctly, don't allow buffer overflows, and have other features to prevent arbitrary code execution (a type system, or secure process isolation). Those kernels have (given the same attention to quality) the potential to be more secure than the Linux kernel by using such languages. Also, by not having to be a general OS kernel, they are simpler to implement, reducing the number of bugs likely to be around.
It's not the "compiling into the application", it's the "implementation of the algorithms" that's smaller. Perhaps that's what the original poster meant, but it sounded strange to me.
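As an illustration of the class of bug such languages rule out (sketched in Go here rather than OCaml, purely because it's compact): an out-of-bounds access becomes a runtime panic the program can observe, instead of a silent read of adjacent memory that an attacker can exploit.

```go
package main

import "fmt"

// readByte attempts a possibly out-of-bounds read. In C this could
// silently read adjacent memory (the raw material of buffer-overflow
// exploits); in a memory-safe language the runtime bounds check turns
// it into a recoverable panic instead.
func readByte(buf []byte, i int) (b byte, ok bool) {
	defer func() {
		if recover() != nil {
			ok = false // the bad access was caught, not exploited
		}
	}()
	return buf[i], true
}

func main() {
	buf := []byte{1, 2, 3}
	_, ok := readByte(buf, 10) // well past the end of the buffer
	fmt.Println("out-of-bounds access caught:", !ok)
}
```

The same guarantee holds for OCaml's array accesses and Erlang's binaries; the compiler and runtime enforce it, not the programmer's discipline.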
If unikernel systems do become popular, it's very likely this would not be true, because AppX and AppY would likely share a popular library that they've both been statically linked against (e.g. an HTTP library with TLS). Granted, the footprint of exploitable features would be considerably smaller.
I agree more with the latter point: a compromised system would probably offer very little for an attacker to leverage.
That seems like it might be a legitimate concern.
There are also support considerations to take into account from a vendor perspective.
That might only be for niche cases, though; others seem to say it can be under 5 MB for more general uses, too:
This is a nice talk on unikernels:
We coupled that with a secure repository and blue/green AWS deployments for a great out of the box experience.
Some of the OpenMirage builds are around 25MB.
Docker builds what amounts to giant statically linked binaries including images of an OS as support, while unikernels relegate the OS to library status.
In both cases this is starting to look like the way you deploy to embedded devices: build image, flash image to device. I mean that is some serious full-circle going on there... we've been trying to get away from that kind of thing since computers were invented.
My phone has an operating system that mostly pretends that it's a mainframe. With just a little hardware, I could hook some circa 1978 VT100s up to it and a team of average *nix web backend developers could even be productive.
My "workstation" pretends to be several tens of machines doing all sorts of different things. These boundaries exist for no reason other than to map to organizational concerns -- my CPU does all kinds of computational gymnastics so I can "keep things tidy".
The model is just wrong. Things are immutable that shouldn't be. Things are mutable that shouldn't be. Things aren't distributed that should be. Things aren't durable that should be. Things are shared that shouldn't be. Things are isolated that shouldn't be. Config management, containers, orchestration, stuff like adding BPF to the kernel, Docker, Hadoop, Mesos -- all very cool, but all very much bandaids.
We're working on it, but because we're human we can't go straight for the goal: we have to play politics, cheat by modeling the future using what we already have today, and lead by example.
Sooner or later computation will have been around for long enough that everything will settle on a common boring architecture (how much steam engine innovation has there been recently?) but until then we'll always be using the systems designed to solve the problems of 10-15 years ago.
To paraphrase Rumsfeld, you don't build on the platform you want, you build on the platform you have.
Much of the motivation for microkernel research, too, was for applications to be able to control their workloads at a finer granularity while retaining a full OS environment. Scout and SPIN were such examples. Something as simple as making page replacement and eviction a userland server where each library or application can implement its own policy over the RPC interface, can yield great benefits.
It depends what you mean by "death". Chester's First Law applies. Some code, somewhere, is responsible for the services traditionally provided by an OS.
If anything this is not the death of the OS, it is the death of POSIX as an application API.
It sounds like unikernels are still based around hardware virtualization. That seems like a mistake to me. If your image includes the code to switch an x86 processor from real mode into protected mode, you're doing it wrong (IMO).
At least with a conventional OS you have a chance to get in and figure out what's going on. With unikernels it sounds as if you may end up with either a core dump (how?) or a misbehaving service you can't get into to diagnose.
Of course, having such a small (and legacy-free) OS makes debugging vastly easier in the first place.
The key for industrial use is extremely robust dev and management tools. Unikernels are thought-provoking but on their own do not appear to be anywhere near enough to solve the problem they are taking on.
At the moment language choice is a blocker for me. I don't want to have to learn OCaml to play with unikernels. LING looks interesting since it's based on Erlang. If I could develop in Elixir, that would make a big difference.