* Homepage: https://unikraft.org
* Docs: http://docs.unikraft.org
* GitHub: https://github.com/unikraft/unikraft
* Twitter: https://twitter.com/UnikraftSDK
* LinkedIn: https://linkedin.com/company/unikraft-sdk
As isolation technology gets lighter-weight, we are going to see it used increasingly as a form of security and scalability in application designs. We've seen the first wave with microservices, but we'll see more.
It's very, very difficult to hack real-time properties into a system after the fact. Most successful attempts end up hacking in a virtualization layer to run the code that wasn't designed to be real-time in the first place, rather than trying to make that code play nice cooperatively with a real-time system.
RTLinux is a good example: it uses Linux to bootstrap a microkernel, which then virtualizes the Linux kernel that booted it.
So that's definitely a possible use case.
Unfortunately, today most graphics hardware relies on drivers for specific versions of specific operating systems.
Hmm? Earlier console generations, and every one since, have had a BIOS; even the Atari 2600 had one.
Unless I am misunderstanding your comment?
MS-DOS is about files -- and AFAIK most games did not re-implement that part, nor did they re-implement the BIOS drive handling, which was sometimes machine-specific. I certainly remember playing games from a Novell NetWare shared drive, which would be impossible if they had re-implemented filesystem support.
And none of the games I know of took over the boot process the way unikernels do. It was always: start DOS, configure the system with MS-DOS's CONFIG.SYS / AUTOEXEC.BAT, and only then start the game. Not very unikernel-y at all.
Now, BIOS video services were very common. BIOS "int 10" was great at setting the video mode, but it sucked for everything else -- even in text mode, we often read/wrote the framebuffer directly.
Not sure about the keyboard -- the hardware is fairly simple, so it would be easy to reprogram... but the BIOS also did a satisfactory job handling it, so it was fine as long as you kept interrupts enabled.
And as for the mouse and sound card, those did not have any DOS drivers to begin with.
Now that it's matured, people are back to optimizing for performance, and this is one of the ways to do that while maintaining the security that virtualization provides. One thing that seems to remain true is that the pendulum of flexibility <> performance is always swinging back and forth in this industry.
2) Containers share the guest kernel. To elaborate, many/most container users are already deployed on top of VMs to begin with; even those in private clouds/private datacenters such as OpenStack deploy on top of them, since there is so much more existing software to manage VMs at scale.
3) Platforms like k8s extend the attack surface beyond one server. If you break out of a container, you potentially have access to everything across the cluster (e.g., many servers), whereas if you break into a VM you just have the VM itself. While you might be inside a privileged network, and you might get lucky finding some DB creds or something inside a conf file, generally speaking you have more work ahead of you to own more hosts.
4) While there are VM escapes, they are incredibly rare compared to container breakouts. Put it this way: the entire public cloud is built on virtual machines. If VM escapes were as prevalent as container escapes, no one would be using AWS at all.
Some cloud providers will trust containers to isolate different customers' code running on a shared kernel, but it's not the norm. I think Heroku might be one such. There's at least one other provider too, but frustratingly I was unable to recall the name. Edit: found it; it was Joyent, who offer SmartOS Zones.
Unikernels have many advantages, and one is removing the complexity of managing the guest VM. People really haven't had the option of removing this complexity, since creating a new guest OS is time-consuming, expensive, and highly technical work. When you have a guest that knows it is virtualized and can concentrate on just that one workload, you get a much better experience.
Aren't containers a hack to prevent issues in libc and shared libs from impacting application deployment?
Guest kernels are already "the application", unikernels remove overhead, everything else is basically the same abstraction applied to itself.
AFAIK MirageOS only supports OCaml, and thus none of those applications. That is an intentional design decision to get dead-code elimination, global program optimization, etc.
(And FWIW I think "monoglot" OSes are a mistake for this reason; Unix is polyglot and that's a feature, not a bug. Even in contexts where a unikernel would make sense, you may need to inspect the node dynamically.)
Regarding inspecting: the design does not exclude adding libraries that enable inspection and monitoring. We even think this is quite important for today's deployments: imagine an integration with Prometheus, or a little embedded SSH shell.
The difference from Unix, and what we want to achieve, is specialization. Unix/Linux/etc. are general-purpose OSes, which are a very good fit when you do not know beforehand which applications you are going to run (e.g., end devices like desktops and smartphones). Unikraft targets the cases where you know beforehand what you are going to run and where you are going to run it. It optimizes the kernel layers, and optionally also the application, for that use case.
In my mind unikernels in the cloud specifically don't seem that different from Unix, because you're running on a hypervisor anyway. If you zoom out, there could be 64 OCaml unikernels running on a machine, or maybe you have 10 OCaml unikernels, 20 Java unikernels, and 30 C++ unikernels.
That looks like a Unix machine to me, except the interface is the VMM rather than the kernel interface. (I know there have been some papers arguing about this; I only remember them vaguely, but this is how I see it.)
It was very logical to use the VMM interface for a while, because AWS EC2 was dominant and it is a stronger security boundary than the Linux kernel. But I do think the kernel interface is actually better for most developers and most languages.
And platforms like Fly are running containers securely: https://fly.io/docs/reference/architecture/
Apparently they use Firecracker VMs which are derived from what AWS lambda uses internally. So I can see the container/kernel interface becoming more popular than the VMM interface. (And I hope it does).
To me, it makes sense for the functionality of the kernel to live on the service provider's side rather than being something the application developer deploys every time. Though Nginx, Redis, and SQLite are interesting use cases... I'd guess they're the minority of cloud use cases, as opposed to apps in high-level languages. But that doesn't mean they're not useful as part of an ensemble, most of which are NOT unikernels.
"MirageOS  is an OCaml-specific unikernel focusing
on type-safety, so does not support mainstream applications,
and its performance, as shown in our evaluation, is sub-par
with respect to other unikernel projects"
For example, it might use an OpenOnload library to watch a ring buffer being pushed packets by a NIC via DMA, processing the packets as they appear and updating a mapped/shared memory region with the results. It might, further, construct packet images based on its results, or what it finds in memory mapped from another process, and use the same library to trigger the NIC to send the packet images.
Regular, scheduled processes may watch the mapped memory and do less timing-constrained work, such as logging events, performing file system operations, or even operating a UI, and maybe queue requests into more mapped memory polled by the isolated process.
This mode of operation is quite common at high-speed stock-trading firms, or for controlling specialized equipment with stringent timing requirements, without the need to deploy a realtime OS.
Usually these systems make heavy use of ring buffers to keep processes decoupled, placed in "hugetlb" 2 MB memory pages (by mapping files in /dev/hugepages) to minimize or even eliminate contention in the memory-map cache, i.e. the TLB.
All the regular system facilities still work. The program is built with the ordinary linker. You can attach your regular debugger to the process, at startup or after it has been running for a month.
For more details, check out the library: https://github.com/unikraft/lib-python3
Think of it as if Node.js could be directly installed on a PC and no Linux is needed.
For someone who doesn't know that much about unikernels, is it possible (or a good idea) to run a multi threaded / multi process application on there? I'm thinking of something like a python app + nginx reverse proxy, or a Go App using the built in http server?
How do you avoid the problem of creating a unikernel that appears to work, until you exercise some different code path at runtime that needs a modular OS feature you didn't think to include in the unikernel?
(I have no idea if this is a real problem, just seemed like it could be from skim reading the basics)
It's clear that bigger players, such as Red Hat, are interested in the topic of unikernels, and that cloud providers are preparing for this future too.
> ...and boot in around 1ms on top of the VMM time (total boot time 3ms-40ms).
Firecracker is trying to minimize "VMM time", while the unikernel is minimizing guest overhead.
In general, the clouds want the unit of deployable compute to be as small as possible, because it means they can make more money packing more customers into the same machines.
AFAIK, today, you can already build your unikernel app into an AMI and run it on EC2. No?
Is there a service which would be more useful for unikernels than this?
One thing would be for there to be an official AWS unikernel SDK - basically, a rump operating system that has everything you need to run nicely on EC2, and nothing else.
I can almost do this on AWS at the moment, I think, though I haven't tried yet and it looks like a big learning cliff from here. Something to make this easier would be a huge win.
PTC and Aicas on Java's case for example, or nanoFramework for .NET.
Having said this, we have ongoing work to integrate Unikraft into Kubernetes. We are very excited about this work and will be making noise and providing extra details about it very soon :)