
Bryan may certainly be right (I neither know him nor much about unikernels), but some parts of his argument seem incredibly weak.

  The primary reason to implement functionality in the
  operating system kernel is for performance...
OK, this seems like a promising start. Proponents say that unikernels offer better performance, and presumably he's going to demonstrate that in practice they have not yet managed to do so, and offer evidence that indicates they never will.

  But it’s not worth dwelling on performance too much; let’s 
  just say that the performance arguments to be made in favor 
  of unikernels have some well-grounded counter-arguments and 
  move on.
"Let's just say"? You start by saying the that the "primary reason" for unikernels is performance, and finish the same paragraph with "it’s not worth dwelling on performance"? And this is because there are "well-grounded counter-arguments" that they cannot perform well?

No, either they are faster, or they are not. If someone has benchmarks showing they are faster, then I don't care about your counter-argument, because it must be wrong. If you believe there are no benchmarks showing unikernels to be faster, then make a falsifiable claim rather than claiming we should "move on".

Are they faster? I don't know, but there are papers out there with titles like "A Performance Evaluation of Unikernels" with conclusions like "OSv significantly exceeded the performance of Linux in every category" and "[Mirage OS's] DNS server was significantly higher than both Linux and OSv". http://media.taricorp.net/performance-evaluation-unikernels....

I would find the argument against unikernels to be more convincing if it addressed the benchmarks that do exist (even if they are flawed) rather than claiming that there is no need for benchmarks because theory precludes positive results.

Edit: I don't mean to be too harsh here. I'm bothered by the style of argument, but the article can still be valuable, even if just as expert opinion. Writing is hard, finding flaws is easy, and having an article to focus the discussion is better than not having an article at all.




I have no dog in this fight. That is, I'm neither financially nor emotionally invested in Docker or any sort of container technology, nor their Solaris OS thing, nor CoreOS, nor any unikernel. I read this blog not knowing who he is and knowing very little about Joyent, so I held no previous malcontent against the man (edit: nor do I now, in case that clause implied otherwise). That being said - you weren't even remotely harsh.

He made a bunch of wild claims without backing them up, even with simple links to bugtraq/securityfocus/whatever providing evidence that hypervisors are inherently additive to the 'surface area' by which you can be attacked. He also, as you mentioned, failed to provide even cursory benchmarks, much less cite any third-party, academic, peer-reviewed analyses. Thirdly, he asserted a false choice between unikernels and on-the-metal. There's nothing stopping you from firing up a heterogeneous environment, using unikernels when they perform well and containers when the situation dictates them. So yeah, you weren't too harsh - IMO, your post was more well-balanced and thought out than his entire blog post. But hey, who knows, maybe he intentionally wanted to be incendiary so we'd all be talking about his company's product (in some capacity at least) on a slow Friday afternoon.


Unikernels, to me, are basically a resurrection of the type of operating system exemplified by Mac OS 9, 8, 7... No memory protection; programs just duking it out against each other with pointers in the same arena, like gladiators. But to invoke an image of disciplined violence in the Roman Empire is too kind; really, this is more like a regression to the Stone Age.

Right, there aren't supposed to be multiple applications in there? But eventually there will be. Things only start small, as a rule.

Look, we have had pages and memory management units since the 1960s. Protection was considered performant enough to be worthwhile on mainframes built from discrete integrated circuits on numerous circuit boards, topping out at clock speeds of some 30 MHz. Fast-forwarding 30 years, people were happily running servers using 80386 and 80486 boxes with MMU-based operating systems.

Why would I want to give up the user/kernel separation and protection on hardware that blows away the protected-memory machine I had 20 years ago?


That would be true if applications ran on only one computer at a time. But nowadays applications run across many computers - sometimes tens of thousands of them. These applications don't need the operating system to protect processes from each other, because they are not running on the same computer.

Now that hypervisors are a mature commodity, this model is practical at smaller scale too: instead of running in separate physical computers, processes run in separate virtual computers.

In short: unikernels make way more sense if you zoom out and think of your entire swarm of computers as a single computer.


That would be true if computers ran only one unikernel at a time, but you need to protect each unikernel from the others.


Computers do run only one unikernel at a time. It's just that sometimes they are virtual computers. Remember that virtualization is increasingly hardware-assisted, and the software parts are mature. So for many use cases it's reasonable to separate concerns and just assume that VMs are just a special type of computer.

For the remaining use cases where hypervisors are not secure enough, use physical computers instead.

For the remaining use cases where the overhead of 1 hypervisor per physical computer is not acceptable, build unikernels against a bare metal target instead (the tooling for this still has a ways to go).

If for your use case hypervisors are not secure enough, 1 hypervisor per physical machine is too much overhead, and the tooling for bare metal targets is not adequate, then unikernels are not a good solution for your use case.


Yes, and processes in a traditional OS have hardware support too, through the MMU and such. At the end of the day, something needs to schedule processes/VMs and something needs to coordinate disk writes. What I'm driving at here is that whether you call them processes and a kernel, or VMs and a hypervisor, you arrive at the same thing, except operating systems are mature at that task and hypervisors are not.


Why do you need to have both a kernel and a hypervisor, though? You've got two levels of abstraction that do the same thing, and just like with M:N threading over processes, they often work at cross purposes.

For most cloud deployments nowadays, the hypervisor is a given. Given that, why not get rid of the kernel?


Except you don't get rid of the kernel; they're not called unikernels for nothing, I presume.

Surely, you should be saying: why do you need all of a kernel and a hypervisor and an app, when you could subsume the app into the kernel and just run the hypervisor and the kernelized app (or single-appified kernel, call it what you want)?

I'm having a hard time seeing the benefits given the obvious increase in complexity.

What features of a full-fat OS do unikernels retain? If the answer is very little, because hypervisors provide all the hardware access, then it would be fair to say that the hypervisor has become the OS and the traditional kernel (a Linux one in this case, I presume) has become virtually * ahem * redundant.


> the hypervisor has become the OS

A hypervisor is an OS. A rose by any other name.


> I'm having a hard time seeing the benefits given the obvious increase in complexity.

What? It's simpler. You remove all the overhead of separating the kernel and app. And that of running a full-featured multiuser, multiprocess kernel for the sake of a single app.

> What features of a full-fat OS do unikernels retain? If the answer is very little, because hypervisors provide all the hardware access, then it would be fair to say that the hypervisor has become the OS and the traditional kernel (a Linux one in this case, I presume) has become virtually * ahem * redundant.

Yes, that's entirely fair.


>the hypervisor has become the OS and the traditional kernel (a Linux one in this case, I presume) has become virtually * ahem * redundant.

That's the point of unikernels right there :)


I think we could spend a long time discussing the relative strengths and weaknesses of hypervisors and traditional operating systems. It's definitely not a one-size-fits-all situation (which is kind of what you're implying).

In any case, I was not arguing that hypervisors are superior to traditional operating systems. I was simply pointing out why comparing unikernels to Mac OS 8 and calling them a "regression to the Stone Age" was missing the point entirely, because of the distributed nature of modern applications.


The distributed nature just means that if attackers find one exploit, they can apply it repeatedly to the distributed application, to give themselves an entire botnet.

All code is privileged, so any remote execution exploit in any piece of code makes you own the whole machine (physical or virtual, as the case may be). A buffer overflow in some HTML-template-stuffing code is as good as one in an ethernet interrupt routine. Wee!


> The distributed nature just means that if attackers find one exploit, they can apply it repeatedly to the distributed application, to give themselves an entire botnet

That may or may not be true... In any case it's completely orthogonal to unikernels. Distributed applications, and any security advantages/disadvantages, are a fact of life.

> All code is privileged, so any remote execution exploit in any piece of code makes you own the whole machine (physical or virtual, as the case may be). A buffer overflow in some HTML-template-stuffing code is as good as one in an ethernet interrupt routine. Wee!

I'm afraid you're parroting what you learned about security without really understanding it. Yes, an exploit will give you access to the individual machine. But what does that mean if the machine is a single trust domain to begin with, with no privileged access to anything other than the application that is already compromised? In traditional shared systems, running code in ring0 is a big deal because the machine hosts multiple trust domains and privileged code can hop between them. That doesn't exist with unikernels.

Add to that the tactical advantages of unikernels: vastly reduced attack surface, a tendency to use unikernels for "immutable infrastructure" which means you're less likely to find an opportunity to plant a rootkit before the machine is wiped, and the fact that unikernels are vastly less homogeneous in their layout (because more happens at build time), making each attack more labor-intensive. The result is that the security story of unikernels, in practice and in theory, is very strong.


You're assuming here that there aren't and never will be exploits that break out of the hypervisor. This is not the world we live in. In literally exactly the same way that you can break out of an application into kernel space, you can break out of a guest VM into hypervisor space. VM guests are processes, and hypervisors are operating systems. We've switched the terminology around a bit, but in doing so we've given up decades of OS development.


> You're assuming here that there aren't and never will be exploits that break out of the hypervisor. This is not the world we live in.

Really? Here's what I wrote in this very thread, just above your message: If for your use case hypervisors are not secure enough, 1 hypervisor per physical machine is too much overhead, and the tooling for bare metal targets is not adequate, then unikernels are not a good solution for your use case. [1]

At this point I believe we are talking past each other, you are not addressing (and apparently not reading) any of my points, so let's agree to disagree.

[1] https://news.ycombinator.com/item?id=10956899


Well, hopefully your VMs are at least as well isolated as a Linux process is.


On the contrary, you could argue that there is _more_ isolation, insofar as the multiple applications will be separate unikernels on the same hypervisor, and the hypervisor will enforce a stricter separation between VMs/unikernels than your OS will between processes.


Yes, sometimes I wonder why unikernels aren't called the resurrection of DOS.


Because we're not using DOS?


held no previous malcontent against the man

That does not sound practical at all.


Writing is Nature's way of letting you know how sloppy your thinking is -- Guindon

From what I understand of MirageOS, an impetus for the "security theatre" is that they believe libc itself is a vulnerability. Therefore, no matter how secure their applications are, they will always be vulnerable: vulnerable to the host OS, as well as to any process it is running and the vulnerabilities those expose. It's not security by obscurity but a reduction in attack surface, which is a well-known and encouraged tactic. I don't see any propositional tautology there.

Yes, they will still be reliant on Type-1 hypervisors... and Xen has had its share of vulnerabilities in the last year. Still, that's a much smaller surface to worry about.

The other benefit is that jitsu could be one interesting avenue to further reduce the attack surface. Summoning a unikernel on demand to service a single request has been demonstrated to be plausible. Instead of leaving a long-running process around, you have a highly-restricted machine summon your application process in an isolated unikernel for a few milliseconds before it's gone.

The kinds of architectures unikernels enable have yet to be fully explored. The ideas being explored by MirageOS are by no means new, but they haven't been given serious consideration. They may not be "ready for production" yet, but given some experimentation and formal specification they may yet prove fruitful.


> Summoning a unikernel on demand to service a single request has been demonstrated to be plausible. Instead of leaving a long-running process around, you have a highly-restricted machine summon your application process in an isolated unikernel for a few milliseconds before it's gone.

From a "far enough" pov (but not too far...), how is that system different from a kernel running processes on-demand? Why any replacement for the libc would contain less vulnerability? Same question for replacing a kernel with an "hypervisor".

I feel I still don't know enough about these subjects to conclude that this whole game, in the end, consists of renaming various components and rewriting parts of them in the process for no real reason. But maybe that's actually it.


Why don't you read http://www.skjegstad.com/blog/2015/08/17/jitsu-v02/ and decide for yourself?

The gist of it is that for a typical DNS server you can boot a unikernel per request to service the query, instead of leaving a long-running process going on a typical server. You can boot these things fast enough to even map them to URLs and create a unikernel to serve each page of your site.

It's hard to approach it from the perspective of what you already know to be true and reliable. It's still experimental and we've only begun to explore the possibilities.


Honestly, to my untrained eyes it still looks like a weird operating system, except more complicated: all of that to implement features you could have implemented more simply with a less crazy architecture, one that does not involve launching a program as if it were the complete system of a standalone computer.

Now, I'm not saying that it is bad to experiment. But if you don't come up beforehand with actual things to test that were not possible on current modern systems, or at least were substantially more difficult to do (instead of quite the opposite), it's not a very useful experiment, just a curious contraption.


> Why would any replacement for libc contain fewer vulnerabilities?

Because it would be written in better languages.

> Same question for replacing a kernel with a "hypervisor".

Because the hypervisor implements a much smaller, better specified API than a kernel does.


That's because his argument is orthogonal to the performance and security arguments. His argument is basically: even if unikernels are faster, and even if they are just as secure, they are still operationally broken because you cannot debug them.

He doesn't need to present a great argument against security or performance. There doesn't even need to be such an argument. If you've ever spent six months trying to find out why a content management system blows up under the strangest of conditions, even when you have a full debug stack, you understand why that argument may be able to stand alone.

The place where his argument falls down, IMO, is, like others have said, in assuming that everything is binary: everything is unikernel or it is not. And that's just silly.


His argument is basically even if unikernels are faster and even if they are just as secure, they are still operationally broken because you cannot debug them.

I personally agree that this would be a stronger argument, but unfortunately it's not the argument he's making. Instead, he's "pleading in the alternative", which is less logical, but can in some situations be more effective. The classic example is from a legendary defense lawyer nicknamed "Racehorse" Haynes:

“Say you sue me because you say my dog bit you,” he told the audience. “Well, now this is my defense: My dog doesn’t bite. And second, in the alternative, my dog was tied up that night. And third, I don’t believe you really got bit.” His final defense, he said, would be: “I don’t have a dog.”

It maps excellently: "As everyone knows, unikernels never have a performance advantage. And even when they are faster, they are always terribly insecure. And even after people solve the security nightmare, they're still impossible to debug. But what's the point in spending time talking about something that doesn't even exist!"

http://www.abajournal.com/magazine/article/richard_racehorse...


The Racehorse example isn't the best example of arguing in the alternative, because the first three "alternatives" are fully compatible with one another; you could easily argue that all three were true. The real alternative branch is "my dog doesn't bite, and in the alternative, I don't have a dog".


The place where his argument falls down is... that you can actually debug unikernels. I do it almost every day.

So if the performance and security arguments are just distractions, and the core argument that they're "undebuggable" is just baldly incorrect, then what's left?


It would be a great argument if it were true. But while he mentions rumprun, he doesn't seem to have noticed that it can do all the things he claims unikernels can't do. Nor is there a claim that the current methods are necessarily ideal; it is an exploration of what else is possible and how to make it work in practice.


What it means is that it doesn't matter whether they are faster or not, because the OS isn't the bottleneck. In most cases the bottleneck is the framework or the user application.


one does not typically find that applications are limited by user-kernel context switches.

the OS isn't the bottleneck.

Curious, then: why are we seeing articles here all the time on bypassing the Linux kernel for low-latency networking?


You don't bypass the kernel; you bypass the TCP stack of the kernel, and this is for very specific applications.


Bypassing the kernel entirely is pretty normal in HPC applications. Infiniband implementations typically memory-map the device's registers into user-space so that applications can send and receive messages without a system call.
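Infiniband stacks do this through their verbs libraries, but the underlying trick is the same one Linux exposes generically through UIO: mmap() the device's register region once, then drive the hardware from user space with plain loads and stores. A minimal sketch of that general technique (not Infiniband-specific; /dev/uio0 and the 4 KiB mapping size are assumptions that depend on your device and driver):

  /* Sketch: map a device's registers into user space via Linux UIO.
   * /dev/uio0 and the 4 KiB size are assumptions; real code reads the
   * region size from /sys/class/uio/uio0/maps/map0/size. */
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("/dev/uio0", O_RDWR);
      if (fd < 0) { perror("open"); return 1; }

      /* Offset N*pagesize selects mapping N under UIO; 0 is map0. */
      volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                     MAP_SHARED, fd, 0);
      if (regs == MAP_FAILED) { perror("mmap"); return 1; }

      /* From here on, reads and writes to regs[] touch the device
       * directly: no system call, no kernel transition. */
      printf("reg0 = 0x%08x\n", (unsigned)regs[0]);

      munmap((void *)regs, 4096);
      close(fd);
      return 0;
  }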


This is not bypassing the kernel. This is called Remote Direct Memory Access (RDMA), and there is still a kernel.

FYI most of the devices inside your computer work through DMA.


One in particular is network capturing via libpcap.

It's basically an alternative driver that comes with additional capabilities, such as capturing promiscuously and filtering captures.
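For example, a minimal sketch ("eth0" and the filter expression are placeholders; link with -lpcap):

  #include <pcap.h>
  #include <stdio.h>

  /* Called once per captured packet. */
  static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                        const u_char *bytes) {
      (void)user; (void)bytes;
      printf("captured %u bytes\n", h->caplen);
  }

  int main(void) {
      char errbuf[PCAP_ERRBUF_SIZE];
      /* promisc=1: ask the NIC for frames not addressed to this host */
      pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
      if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }

      /* Compile and install a BPF filter; on most platforms the
       * filtering then happens below the capture path, not in the app. */
      struct bpf_program fp;
      if (pcap_compile(p, &fp, "tcp port 80", 1, PCAP_NETMASK_UNKNOWN) == -1 ||
          pcap_setfilter(p, &fp) == -1) {
          fprintf(stderr, "%s\n", pcap_geterr(p));
          return 1;
      }

      pcap_loop(p, 10, on_packet, NULL);  /* capture 10 packets, then stop */
      pcap_close(p);
      return 0;
  }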


Just curious, how does a unikernel solve network latency?


The issue GP talks about comes from the cost of context-switching on a syscall (going into "kernel mode", performing the call, then going back into "application mode"). There's no context switch in a unikernel.
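If you want to put a rough number on that round trip, a toy measurement like this gives a ballpark (just a sketch; results vary a lot by CPU, kernel version, and mitigations):

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/syscall.h>
  #include <time.h>
  #include <unistd.h>

  int main(void) {
      enum { N = 1000000 };
      struct timespec a, b;
      clock_gettime(CLOCK_MONOTONIC, &a);
      for (int i = 0; i < N; i++)
          syscall(SYS_getpid);  /* raw syscall: forces a real kernel entry */
      clock_gettime(CLOCK_MONOTONIC, &b);
      double ns = (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
      printf("~%.0f ns per user/kernel round trip\n", ns / N);
      return 0;
  }

In a unikernel, the equivalent of that transition is an ordinary function call.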


Well, unless you count the hypervisor context switch, which you do.


And if you need super high performance, you can run a unikernel on bare metal.


I guess in the glorious future everyone will be using SR-IOV.


Are you assuming SR-IOV passthrough (which has its own performance profile)? Because normal virt -definitely- hits a context switch when it goes from the unikernel's virtual NIC to the real NIC, if not twice.


Unless you are running a load balancer, static HTTP server, or in-memory key-value store. Those aren't exactly fringe use cases.


Even in those examples, it applies mostly to the subset of users who have very large numbers of very simple requests.

The ratio of kernel to userland work is bad if you're receiving an entire network request just to increment a word in memory, but usually quite tolerable if, say, you're shoveling video files out and most of the time is spent in something like sendfile(), or if your small request actually requires a non-trivial amount of computation.
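For reference, that hot path is roughly this (a minimal Linux sketch; client_fd is assumed to be a connected socket and error handling is pared down):

  #include <fcntl.h>
  #include <sys/sendfile.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* Shovel a file out over a socket; the copy loop runs entirely in
   * the kernel, so userland does almost nothing per request. */
  static int serve_file(int client_fd, const char *path) {
      int fd = open(path, O_RDONLY);
      if (fd < 0) return -1;
      struct stat st;
      if (fstat(fd, &st) < 0) { close(fd); return -1; }
      off_t off = 0;
      while (off < st.st_size) {
          ssize_t n = sendfile(client_fd, fd, &off, st.st_size - off);
          if (n <= 0) break;  /* error or short transfer; check errno for real */
      }
      close(fd);
      return off == st.st_size ? 0 : -1;
  }

One syscall per chunk and the kernel moves the bytes, so the userland share of the work, which is where a unikernel could help, is negligible.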


If you're doing nontrivial computation, that absolutely dominates the kernel overhead (almost by definition). But how frequent is that? A huge amount of programming boils down to CRUD and very basic transformations, maybe a little bit of fanning out/in, all of which involve minimal actual compute.


Developers of such software might care about unikernels; users, probably less.


This is a correct approach from a developer's point of view; however, it is out of scope from the operator's point of view. You can always improve your software stack as a developer. These are not exclusive; they are different concerns and should not be conflated.


That, and you're giving up a lot to gain that marginal increase in performance.


the perf gains are just as possible without a unikernel. most high-perf net apps at this point have moved to userspace net stacks to avoid interrupt+switch cost, a la intel dpdk; an opensource example would be https://github.com/SnabbCo/snabbswitch/blob/master/README.md but there are tons of others in commercial spaces.

i.e. as per what bryan said, there are plenty of counterexamples on perf.
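for a flavor of the model, here's a compressed sketch of dpdk's poll-mode receive path. treat it as pseudocode with real names: the calls are dpdk's public ethdev API, but details shift between releases and a real program needs far more setup (hugepages, NIC binding, etc):

  #include <stdint.h>
  #include <stdio.h>
  #include <rte_eal.h>
  #include <rte_ethdev.h>
  #include <rte_lcore.h>
  #include <rte_mbuf.h>

  #define BURST 32

  int main(int argc, char **argv) {
      if (rte_eal_init(argc, argv) < 0) {
          fprintf(stderr, "EAL init failed\n");
          return 1;
      }

      struct rte_mempool *pool = rte_pktmbuf_pool_create(
          "mbufs", 8191, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

      uint16_t port = 0;                    /* first DPDK-bound NIC */
      struct rte_eth_conf conf = {0};       /* all defaults */
      rte_eth_dev_configure(port, 1, 1, &conf);
      rte_eth_rx_queue_setup(port, 0, 512, rte_eth_dev_socket_id(port),
                             NULL, pool);
      rte_eth_tx_queue_setup(port, 0, 512, rte_eth_dev_socket_id(port), NULL);
      rte_eth_dev_start(port);

      struct rte_mbuf *bufs[BURST];
      for (;;) {
          /* busy-poll the NIC: no interrupts, no context switches */
          uint16_t n = rte_eth_rx_burst(port, 0, bufs, BURST);
          for (uint16_t i = 0; i < n; i++)
              rte_pktmbuf_free(bufs[i]);    /* a real app processes here */
      }
  }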


It's indeed not worth dwelling on performance (even if it is one of the main arguments for unikernels) if your main topic is that they are poor from a security standpoint.


You missed the whole section where he compared unikernels on Xen vs. Linux (or Solaris) on metal. Unikernels have to run on Xen, so either way the best you're going to do is have one level of abstraction (OS or hypervisor) between you and your application.


Rumpkernel can run on bare metal, too.




