
Unikernels are unfit for production - anujbahuguna
https://www.joyent.com/blog/unikernels-are-unfit-for-production
======
nkurz
Bryan may certainly be right (I neither know him nor much about unikernels),
but some parts of his argument seem incredibly weak.

    
    
      The primary reason to implement functionality in the
      operating system kernel is for performance...
    

OK, this seems like a promising start. Proponents say that unikernels offer
better performance, and presumably he's going to demonstrate that in practice
they have not yet managed to do so, and offer evidence that indicates they
never will.

    
    
      But it’s not worth dwelling on performance too much; let’s 
      just say that the performance arguments to be made in favor 
      of unikernels have some well-grounded counter-arguments and 
      move on.
    

"Let's just say"? You start by saying the that the "primary reason" for
unikernels is performance, and finish the same paragraph with "it’s not worth
dwelling on performance"? And this is because there are "well-grounded
counter-arguments" that they cannot perform well?

No, either they are faster, or they are not. If someone has benchmarks showing
they are faster, then I don't care about your counter-argument, because it
must be wrong. If you believe there are no benchmarks showing unikernels to be
faster, then make a falsifiable claim rather than claiming we should "move
on".

Are they faster? I don't know, but there are papers out there with titles like
"A Performance Evaluation of Unikernels" with conclusions like "OSv
significantly exceeded the performance of Linux in every category" and
"[Mirage OS's] DNS server was significantly higher than both Linux and OSv".
[http://media.taricorp.net/performance-evaluation-unikernels.pdf](http://media.taricorp.net/performance-evaluation-unikernels.pdf)

I would find the argument against unikernels to be more convincing if it
addressed the benchmarks that do exist (even if they are flawed) rather than
claiming that there is no need for benchmarks because theory precludes
positive results.

Edit: I don't mean to be too harsh here. I'm bothered by the style of
argument, but the article can still be valuable even if just as expert opinion.
Writing is hard, finding flaws is easy, and having an article to focus the
discussion is better than not having an article at all.

~~~
shin_lao
What it means is that it doesn't matter if they are faster or not because the
OS isn't the bottleneck. The bottleneck is the framework or the user
application in most of the cases.

~~~
dsp1234
_one does not typically find that applications are limited by user-kernel
context switches._

 _the OS isn't the bottleneck._

Curious, then why are we seeing articles here all the time on bypassing the
linux kernel for low latency networking?

~~~
shin_lao
You don't bypass the kernel, you bypass the TCP stack of the kernel and this
is for very specific applications.

~~~
elihu
Bypassing the kernel entirely is pretty normal in HPC applications. Infiniband
implementations typically memory-map the device's registers into user-space so
that applications can send and receive messages without a system call.

~~~
shin_lao
This is not bypassing a kernel. This is called Remote Direct Memory Access
(RDMA) and there is still a kernel.

FYI most of the devices inside your computer work through DMA.

------
vezzy-fnord
Bryan Cantrill seems to have some personal interest in denigrating OS research
(defined as virtually everything post-Unix) as all being part of a misguided
"anti-Unix Dark Ages of Operating Systems". He has expressed this sentiment
multiple times before, and places a great deal of faith in Unix being a
timeless edifice which needs only renovation. Naturally, he regards DTrace and
MDB to be the pinnacles of OS design in the past 20 years and never stops
yapping on about them, this article being no exception. It's his thought-
terminating cliche.

He voiced all this here [1], and so I countered by listing stuck paradigms in
traditional monolithic Unixes, as well as reopening my inquiry on Sun's Spring
research system, which he seems to scoff at but whose academic yield I find
impressive. He has yet to respond to my challenge.

[1]
[https://news.ycombinator.com/item?id=10324211](https://news.ycombinator.com/item?id=10324211)

~~~
cthalupa
There's a lot of stuff Solaris got right, long before the Linux world did, and
in some ways, it's still catching up.

DTrace? Sure, there's a plethora of dynamic tracing tools in linux, but it's
honestly just now starting to catch up with eBPF. If eBPF stack traces make it
into 4.5, that might be the first time I can look at Linux dynamic tracing and
go "Yep, it's arrived"

Systemd certainly apes quite a bit of its functionality from SMF (and,
personally, I'd argue SMF still does basically everything better...)

ZFS? Still the king of filesystem/logical volume management. Btrfs might catch
up someday

Zones? Again, still ahead of the linux equivalent in functionality. And lx
branded zones are awesome.

I'm not saying nothing needs to advance ever again in the world of OS
research, but I think we need to be honest about how much we owe to Sun for
having created the modern template for a great amount of functionality that is
only now coming into existence in Linux, and it's not like things at Joyent,
OmniTI, Nexenta, and other Illumos-developing shops have stagnated either.

I often disagree with Bryan on specific points, but I think you do him a
disservice. While many of the things Sun did might be viewed as an incremental
update to existing concepts, it's still something that the Linux community has
yet to catch up with.

Anyway, even when he is wrong, I think he often brings up a viewpoint that
results in some interesting discussion.

~~~
hinkley
Joyent is claiming that they are getting pretty impressive numbers out of
running docker images on Solaris, by using some sort of funky shim layer to
run Linux in a zone.

I also like to point out that Java, another Sun project, was a big impetus for
a bunch of OSes to get properly functioning pre-emptive threading implemented.
At the beginning were dark times, and even Solaris was no cake walk. C has
copied the Java Memory Model for shared state concurrency, and most of the
best papers on Garbage Collection were written in the mid to late nineties,
often targeting Java. Subsequently in the 00's, escape analysis made pretty
giant leaps forward, leading to the memory system you have in Rust today.

I think people who argue about the best data models for shared state
concurrency often forget that they're taking us-vs-them stances on conventions
that were _all documented by the same individual_, the Late, Great, Sir Tony
Hoare.

There have really only been two breakthroughs since then. Transactional
memory, which is still in a state of discovery and who knows when we'll get
forward motion on that (especially since Intel's hardware support went belly
up), and object ownership based on escape analysis, as used internally in
recent JVMs, and explicitly in a few new languages, like Rust.

Now if only they'd pushed type theory forward, instead of regressing and
taking the rest of us with them...

~~~
pja
_Late, Great, Sir Tony Hoare_

Tony Hoare is very much still alive to the best of my knowledge!

~~~
Einherji
Yup, in fact I met him recently at a conference, looked great.

~~~
chris_wot
It's quite possible that he meant that he believes he has a time management
problem, but finds it irrelevant and endearing due to his greatness.

------
chubot
Big upvotes for this article. I'm glad it was written, because I've seen
nothing but hype for Unikernels on Hacker News (and in ACM, etc.) for the last
2 years. It's great to see the other side of the story.

The biggest problem with Unikernels like Mirage is the single language
constraint (mentioned in the article). I actually love OCaml, but it's only
suitable for very specific things... e.g. I need to run linear algebra in
production. I'm not going to rewrite everything in OCaml. That's a nonstarter.

And I entirely agree with the point that Unikernel simplicity is mostly a
result of their immaturity. A kernel like seL4 is also simple, because like
unikernels, it doesn't have that many features.

If you want secure foundations, something like seL4 might be better to start
from than Unikernels. We should be looking at the fundamental architectural
characteristics, which I think this post does a great job on.

It seems to me that unikernels are fundamentally MORE complex than containers
with the Linux kernel. Because you can't run Xen by itself -- you run Xen
along with Linux for its drivers.

The only thing I disagree with in the article is debugging vs. restarting. In
the old model, where you have a sys admin per box, yes you might want to log
in and manually tweak things. In big distributed systems, code should be
designed to be restarted (i.e. prefer statelessness). That is your first line
of defense, and a very effective one.

~~~
bahamat
> The only thing I disagree with in the article is debugging vs. restarting.
> In the old model, where you have a sys admin per box, yes you might want to
> log in and manually tweak things. In big distributed systems, code should be
> designed to be restarted (i.e. prefer statelessness). That is your first
> line of defense, and a very effective one.

But if you never understand why it was a bad state in the first place you're
doomed to repeat it. Pathologies need to be understood before they can be
corrected. Dumping core and restarting a process is sometimes appropriate. But
some events, even with stateless services, need in-production, live,
interactive debugging in order to be understood.

~~~
Guvante
> But some events, even with stateless services, need in-production, live,
> interactive debugging in order to be understood.

The question then becomes whether it is reproducible, since "debuggable when
not running normally" seems to be the common thread of unikernels, such as
being able to host the runtime in Linux directly rather than on a VM.

I think if you try a low level language these kinds of things are going to
bite you, but a fleshed out unikernel implementation could be interesting for
high level languages, since they typically don't require the low level
debugging steps in the actual production environment.

In either case unikernels have a lot of ground to cover before they can be
considered for production.

------
Hoff
Interesting article. Rather than arguing what can or cannot be done or what
might or might not work, here's some code, and some history.

Here's full-mixed-language programmable, locally- and fully-remote-debuggable,
mixed-user and inner-mode processing unikernel, and with various other
features...

This from 1986...

[http://bitsavers.trailing-edge.com/pdf/dec/vax/vaxeln/2.0/VAXELN_2.2_Brochure_1986.pdf](http://bitsavers.trailing-edge.com/pdf/dec/vax/vaxeln/2.0/VAXELN_2.2_Brochure_1986.pdf)

FWIW, here's a unikernel thin client EWS application that can be downloaded
into what was then an older system, to make it more useful for then-current
X11 applications...

From 1992...

[http://h18000.www1.hp.com/info/SP3368/SP3368PF.PDF](http://h18000.www1.hp.com/info/SP3368/SP3368PF.PDF)

Anybody that wants to play and still has a compatible VAX or that wants to try
the VCB01/QVSS graphics support in some versions of the (free) SIMH VAX
emulator, the VAX EWS code is now available here:

[http://www.digiater.nl/openvms/freeware/v50/ews/](http://www.digiater.nl/openvms/freeware/v50/ews/)

To get an OpenVMS system going to host all this, HPE has free OpenVMS hobbyist
licenses and download images (VAX, Alpha, Itanium) available via registration
at:

[https://h41268.www4.hp.com/live/index_e.aspx?qid=24548&desig...](https://h41268.www4.hp.com/live/index_e.aspx?qid=24548&design=cs)

Yes, this stuff was used in production, too.

~~~
iheartmemcache
Not only was it used in production, from the first-hand anecdotal accounts
I've heard, the VAX/VMSclusters were near-z/OS level of reliability. For a
brief time, it was used both in mission-critical environments as well as in
academic institutions (basically two of the three large markets that existed
during that era).

Every 10 years, the same thing gets re-invented. Take network block
devices/clustered sharing. VMS had high availability, and each node you joined
could use its local disk as an aggregate resource. In the 90s you had
AndrewFS and CODA (CMU's golden age IMO). Then Linux had the whole DRBD era,
which gained traction about 10 years ago, right around the time Hadoop was
gaining traction. OpenStack has Cinder. 10 years from now we'll have something
else.

Anyways, great points and good post. VAXstations are available on ebay for
pretty cheap, but I'd personally go with a hobbyist OpenVMS Alpha license
running on ES40. I threw a setup together a few years back and it was neat.
Thanks for the data-sheet, my father will get a huge kick out of it.

~~~
Hoff
There are presently OpenVMS servers and clusters in production in a number of
locations, and new configurations are being installed — primarily for existing
applications, obviously.

The most recent OpenVMS release shipped in June 2015, and the next release is
due to ship in March 2016.

There's a port to x86-64 underway, as well.

For those looking for hardware for hobbyist use, used Integrity Itanium
servers are usually cheaper than used working Alpha and VAX gear, and newer —
working VAX and Alpha gear has become more expensive in recent years. Various
VAX and Alpha emulators are available, either as open source or available to
hobbyists at no cost.

------
ChuckMcM
Well that is pretty provocative :-) Bryan might be surprised to learn that for
the first 15 years of their existence NetApp filers were Unikernels in
production. And they outperformed NFS servers hosted on OSes quite handily
throughout that entire time :-).

The trick though is they did only one thing (network attached storage) and
they did it very well. That same technique works well for a variety of network
protocols (DNS, SMTP, Etc.). But you can do that badly too. We had an
orientation session at NetApp for new employees which helped them understand
the difference between a computer and an appliance: the latter had a computer
inside of it but wasn't programmable.

~~~
thockingoog
Hi Chuck!

At risk of speaking for Bryan, I think the difference between a NetApp and a
unikernel-in-a-hypervisor is the sharedness of it. Without taking a position
on Bryan's article (it's always entertaining to read his thoughts), I think
his point is that the advantages of a unikernel are largely washed away, and
the disadvantages are emphasized.

While Bryan is somewhat bombastic (more fun to read), there's a lot of smart
in this article, I think.

~~~
ChuckMcM
Some of the original "unikernel in a shared environment" research and
development was done on IBM's VM system in the 70's. Their motivation was that
there were many customers who used a dedicated application on what was then a
mainframe to process their data (this would be analogous to their unikernel)
and they wanted to consolidate their hardware, so they got a bigger mainframe
(like an IBM 370 at the time) and they would run all of these dedicated
applications in their own logical partition (LPAR). It brought three huge
benefits to the table (error containment, hardware consolidation, and backward
compatibility). IBM had experienced the same effect that we're seeing today:
their new mainframes in the 70's were so much more powerful than the ones from
the 50's and 60's, and, worse for them, the machines from the 50's and 60's
running their dedicated apps were still fine. So they developed
a way to make computing more cost effective for their customer and at the same
time opened the market further for their mainframes.

Today, a typical dual core 8GB x86 machine can, as a dedicated machine, run a
lot of things. At the same time, the evolution of open systems has brought
"continuous configuration integration" into the mainstream; all major OSs from
OS X to Windows to Linux have weekly, sometimes daily, reconfiguration events.

And while the number of changes in aggregate is high, the number of
changes on particular subsystems is low. Unikernels answer the need of
creating a stable enough snapshot of the world to allow for better
configuration management. Look at the example of the FreeBSD system taken down
after 20 years. Some services can just run and run and run.

Image isolation is a thing, and you can only be as good as your underlying
software and hardware can make you, but it can also be a big boost to
operational efficiency if that can simplify your security auditing and
maintenance.

So my take on Bryan's article was that he came at the argument from one
direction, which is fine, but to be more thorough it would help to look at it
from several directions. What was worse was that he made some assertions (like
never being in production) before defining precisely what he means by a
Unikernel which leaves him wide open to examples like NetApp and IBM's VM
system to counter his assertion.

The nice thing about computers these days is that many of the problems we
experience have been experienced in different ways and solved in different
ways already, and we can learn from those. The Unikernel discussion is not
complete without looking through the history of machines which are dedicated
appliances (from Routers, to Filers, to Switches, to Security camera
archivers).

Like most things, I don't think unikernels are a panacea but they also aren't
the end of the world and have been applied in the past with great success.

------
derefr
> Unikernels are entirely undebuggable

I'm pretty sure you debug an Erlang-on-Xen node in the same way you debug a
regular Erlang node. You use the (excellent) Erlang tooling to connect to it,
and interrogate it/trace it/profile it/observe it/etc. The Erlang runtime _is_
an OS, in every sense of the word; running Erlang on Linux is truly just
_redundant_ , since you've already got all the OS you need. That's what
justifies making an Erlang app a unikernel.

But that's an argument coming from the perspective of someone tasked with
maintaining persistent long-running instances. When you're in that sort of
situation, you _need_ the sort of things an OS provides. And that's actually
rather rare.

The true "good fit" use-case of Unikernels is in immutable infrastructure. You
_don't_ debug a unikernel, mostly; you just kill and replace it (you "let it
crash", in Erlang terms.) Unikernels are a formalization of the (already
prevalent) use-case where you launch some ephemeral VMs or containers as a
static, mostly-internally-stateless "release slug" of your application tier,
and then roll out an upgrade by starting up new "slugs" and terminating old
ones. You can't really "debug" those (except via instrumentation compiled into
your app, ala NewRelic.) They're black boxes. A unikernel just statically
links the whole black box together.

Keep in mind, "debugging" is two things: development-time debugging and
production-time debugging. It's only the latter that unikernels are
fundamentally bad at. For dev-time debugging, both MirageOS and Erlang-on-Xen
come with ways to compile your app as an OS process rather than as a VM image.
When you are trying to _integration-test_ your app, you integration-test the
process version of it. When you're trying to _smoke-test_ your app, you can
still use the process version—or you can launch (an instrumented copy of) the
VM image. Either way, it's no harder than dev-time debugging of a regular non-
unikernel app.

~~~
yunong
> You don't debug a unikernel, mostly; you just kill and replace it

Curious as to how you would drive to root cause the bugs that caused the crash
in the first place? If you don't root cause, won't subsequent versions still
retain the same bugs?

There are bugs that can only manifest themselves in production. Any system
where we don't have the ability to debug and reproduce these classes of
problems in prod is essentially a non-starter for folks looking to operate
reliable software.

~~~
lmm
With unikernels you get a lot more consistency. E.g. I once saw a bug that
came down to one server using reiserfs and another using ext2. But there's no
way to have that problem with unikernels.

But sure, you need a debugger. So you use one. I'm not sure why the author
seems to think that's so hard.

~~~
cyphar
> But sure, you need a debugger. So you use one. I'm not sure why the author
> seems to think that's so hard.

The author wrote and continues to contribute to DTrace, which is an incredibly
advanced facility for debugging and root causing problems. GDB (for example)
doesn't help you solve performance problems or root-cause them, because now
your performance problem has become ptrace (or whatever tracing facility GDB
uses on that system).

The point he was making is that there are problems with porting DTrace to a
unikernel (it violates the whole "let's remove everything" principle, and you
couldn't practically modify what DTrace probes you're using at runtime because
the only process is your misbehaving app -- good luck getting it to enable the
probes you'd like to enable).

~~~
lmm
You can't modify them from within your app, sure. You modify them from a
(privileged) outside context. Allowing the app to instrument itself that
thoroughly violates the principle of least privilege the author was so fond
of.

------
geofft
It may well be the case that unikernels as currently envisioned by unikernel
proponents are impossible to make fit for production; it may also well be the
case that there exists a product that is closer to a unikernel than current
kernels, that is quite production-suitable, and unikernels are fruitful
research to that point.

For instance, you could imagine a unikernel that did support fork() and
preemptive multitasking, but took advantage of the fact that every process
trusts every other one (no privilege boundaries) to avoid the overhead of a
context switch. Scheduling one process over another would be no more expensive
than jumping from one green (userspace) thread to another on regular OSes,
which would be a _huge_ change compared to current OSes, but isn't quite a
unikernel, at least under the provided definition.

Along similar lines, I could imagine a lightweight strace that has basically
the overhead of something like LD_PRELOAD (i.e., much lower overhead than
traditional strace, which has to stop the process, schedule the tracer, and
copy memory from the tracee to the tracer, all of which is slow if you care
about process isolation). And as soon as you add lightweight processes, you
get tcpdump and netstat and all that other fun stuff.

On another note, I'm curious if hypervisors are inherently easier _to secure_
(not currently more secure in practice) than kernels. It certainly seems like
your empirical intuition of the kernel's attack surface is going to be
different if you spend your time worrying about deploying Linux (like most
people in this discussion) vs. deploying Solaris (like the author).

~~~
_wmd
> you could imagine a unikernel that did support fork() and preemptive
> multitasking but took advantage of the fact that every process trusts every
> other one

No need to imagine, this is exactly how Microsoft Singularity worked (it
benefited from a language expressive enough to make that trust possible)

~~~
geofft
Yeah, Singularity is an amazing existence proof of lots of cool stuff in the
unikernel-ish space (though, sadly, not quite of anything being suitable for
_production_).

Is Singularity a unikernel? More specifically, would the unikernel.org folks
consider a production-ready kernel inspired by Singularity and targeting a
hypervisor to be a unikernel? The 2013 paper's introduction section contains
the sentence, "By targeting the commodity cloud with a library OS, unikernels
can provide greater performance and improved security compared to Singularity
[4]," so I'd imagine no. But I don't see any expansion on that point, so I
suspect it was added to appease a reviewer.

------
bcg1
This article is mostly FUD I think.

It comes off as a slew of strawmen arguments ... for example the idea that
unikernels are defined as applications that run in "ring 0" of the
microprocessor... and that the primary reason is for performance...

All of the unikernel implementations he mentioned (mirageos, osv, rumpkernels)
run on top of some other hardware abstraction (xen, posix, etc.), with perhaps
the exception of a "bmk" rumpkernel.

We currently have a situation in "the cloud" where we have applications
running on top of a hardware abstraction layer (a monolithic kernel) running
on top of another hardware abstraction layer (a hypervisor). Unikernels
provide a (currently niche) solution for eliminating some of the 1e6+ lines of
monolithic kernel code that individual applications don't need and that
introduce performance and security problems. To dismiss this as "unfit for
production" is somewhat specious.

I wonder if Joyent might have a vested interest in spreading FUD around
unikernels and their usefulness.

~~~
mwcampbell
Your argument in favor of unikernels assumes that we're stuck with hardware
virtualization as the lowest layer of the software stack. What if cloud
providers offered secure containers on bare metal, under a shared OS kernel?
That's what Joyent provides. So yes, Joyent has a vested interest in calling
out the problems with unikernels. But I think their primary motive is that
they truly believe containers on bare metal are a superior solution.

~~~
jclulow
Speaking for myself, that's exactly why I work at Joyent. I believe in OS
virtualisation (whether you call them zones or containers) for multi-tenancy,
in high quality tools for debugging both live (DTrace) and post mortem (mdb),
and in open source infrastructure software (SmartOS, SDC, etc).

I also believe that as an industry and a field, we should continue to build on
the investments we've already made over many decades. The Unikernel seems, to
me at least, to be throwing out almost everything; not just undesirable
properties, but also the hard-won improvements in system design that have
fired so long in the twin kilns of engineering and operations.

~~~
oxryly1
Isn't it possible, therefore, that unikernels and Joyent's virtualisation serve
different purposes and are definitely not meant to be interchangeable?

------
_wmd
I think the problems with this article are well covered already. Just a
suggestion for Joyent: articles like this are damaging to your excellent
reputation; I would suggest a thin layer of review before hitting the post
button!

Some additional meat:

\- The complaint about Mirage being written in OCaml is nonsense; it's trivial
to create bindings to other languages, and in 40 years this never stopped us
interfacing, e.g., Python with C.

\- A highly expressive type/memory safe language is not "security through
obscurity", an SSL stack written in such a language is infinitely less likely
to suffer from some of the worst kinds of bugs in recent memory (Heartbleed
comes to mind)

\- Removing layers of junk is already a great idea, whether or not MirageOS or
Rump represent good attempts at that. It's worth remembering that SMM, EFI and
microcode still exist on every motherboard, using some battle-tested
middleware like Linux doesn't get you away from this.

\- Can't comment on the vague performance counterarguments in general, but
reducing accept() from a microseconds affair to a function call is a difficult
benefit to refute in modern networking software.

~~~
2trill2spill
> an SSL stack written in such a language is infinitely less likely to suffer
> from some of the worst kinds of bugs in recent memory (Heartbleed comes to
> mind)

While you are right about OCaml being safer than C, Heartbleed was a pretty
lame bug, it doesn't even give an attacker remote code execution. Something
like CVE-2014-0195 is far more dangerous than Heartbleed but it didn't have a
marketing name and large amounts of press coverage.

~~~
wolf550e
A bug in DTLS will not get attention because people don't run DTLS.

~~~
2trill2spill
Yeah, probably a poor choice of an OpenSSL vulnerability. I was assuming this
was on by default even when using TLS, like lots of other OpenSSL features, but
then I found this line: "Only applications using OpenSSL as a DTLS client are
affected."[1]

CVE-2012-2110 is probably a better choice.

[1]:
[https://www.openssl.org/news/secadv/20140605.txt](https://www.openssl.org/news/secadv/20140605.txt)

~~~
dsp1234
_CVE-2012-2110 is probably a better choice._

From the openssl advisory[0], "In particular the SSL/TLS code of OpenSSL is
_not_ affected.".

[0] -
[https://www.openssl.org/news/secadv/20120419.txt](https://www.openssl.org/news/secadv/20120419.txt)

------
ewindisch
I'm happy for this article because it does hit some points on the head. Other
points are deeply entrenched in Bryan's biases, but I can't really fault him
for that.

In particular, I am suspicious of the idea that unikernels are more secure.
Linux containers make the application secure in several ways that neither
unikernels nor hypervisors can really protect from. Point being a unikernel
(as defined) can do anything it wishes to on the hardware. There is no
principle of least-privilege. There are no unprivileged users unless you write
them into the code. It's the same reason why containers are more secure than
VMs.

Users are only now, and slowly, starting to understand the idea that
containers can be more secure than a VM. False perspectives and promises of
unikernel security only conflate this issue.

That said, I do think the problems with unikernels might eventually go away as
they evolve. Libraries such as Capsicum could help, for instance. Language-
specific or unikernel-as-a-vm might help. Frameworks to build secure
unikernels will help. Whatever the case, the problems we have today are not
solved or ready for production -- yet.

This blog post was clearly spurred by the acquisition made by Docker (of which
I am an alumnus). I think it's a good move for them to be ahead of the
technology, despite the immediate limitations of the approach.

------
pyritschard
The essential point the lengthy article makes revolves around debugging
facilities for unikernels. While mostly true for MirageOS and the rest of the
unikernel world today, OSv showed that it is quite possible to provide good
instrumentation tooling for unikernels.

The smaller point about porting applications (whether targeting unikernels
that are specific to a language runtime or more generic ones like OSv and
rumpkernels) is the most salient; it will probably restrict unikernel
adoption.

For Docker, if only to provide a good substrate for providing dev environments
for people running Windows or Mac computers, it is very promising.

~~~
anttiok
What porting? We have quite a few pieces of software in rumprun-packages which
require ZERO (0) porting or patches to function as unikernels, e.g. haproxy,
mpg123 and php. Feel free to check them out for yourself if you don't want to
take my word for it.

------
uxcn
I think Bryan Cantrill and Joyent are doing a number of interesting things,
but this reads more like an ad than a genuine critique of Unikernels.

    
    
        The primary reason to implement functionality in the
        operating system kernel is for performance: by avoiding
        a context switch across the user-kernel boundary,
        operations that rely upon transit across that boundary
        can be made faster.
    

I haven't heard this argument made once. There are performance benefits
(smaller footprint, compiler optimization across system call boundaries,
etc...). However, the primary benefit is not performance from eliminating the
user/kernel boundary.

    
    
        Should you have apps that can be unikernel-borne, you
        arrive at the most profound reason that unikernels are
        unfit for production — and the reason that (to me,
        anyway) strikes unikernels through the heart when it
        comes to deploying anything real in production:
        Unikernels are entirely undebuggable.
    

If this were true, and an issue, FPGAs would also be completely unusable in
production.

~~~
nickpsecurity
"If this were true, and an issue, FPGAs would also be completely unusable in
production."

BOOM! And kernels. And ASICs. And so on. Yet, we have tools to debug all of
them. But unikernels? Better off trying to build a quantum computer than
something that difficult...

~~~
uxcn
It's not the same as using something like DTrace on a live system, but he's
describing it as though eliminating the flexibility implies some sort of event
horizon.

This also bothered me...

    
    
        virtualizing at the hardware layer carries with it an inexorable performance tax
    

Hardware has been adding a lot of virtualization support over the last decade,
and it's generally been a net performance gain as far as I'm aware. I could
see things like IO scheduling potentially being an issue. Although, IO
performance on top of operating systems in general does not particularly
inspire confidence.

~~~
nickpsecurity
Next time you see that, point out that the hardware is itself basically
virtualized to monolithic OS's with microcode, shared I/O, and multiplexed
buses. A version could work for unikernels if that worked for UNIX.
Performance issues come more from how it's applied than the concept itself.

~~~
jroesch
I think you guys might find Arrakis interesting:
[https://arrakis.cs.washington.edu/](https://arrakis.cs.washington.edu/) it
won Best Paper at OSDI '14 and demonstrates a possible way to better use
things like virtio.

~~~
nickpsecurity
That was interesting. Thanks for the link. I'll read the full papers later.
Meanwhile, both Arrakis and FlexNIC are now in my collection. :)

------
seliopou
First, let's put aside the start of the blog post, which consists entirely of
empirical questions. Each potential adopter of unikernels will have to figure
out for themselves whether their specific use-case justifies the costs and
benefits of this particular technology, just like all others.

Putting that aside, debuggability is an obvious and pressing issue to
production use-cases. Any proponent of unikernels that denies that should be
defenestrated. I haven't come across any that do.

How to go about debugging unikernels is unclear because it certainly is still
early days. However, I don't think the lack of a command-line in principle
precludes debuggability, nor to my mind does it even preclude using some of the
traditional tools that people use today. For example, I could imagine a
unikernel library that you could link against that would allow for remote
dtrace sessions. Once you have that, you can start rebuilding your toolchain.

P.S. Bryan, where's my t-shirt?

~~~
mcguire
Exactly. The JVM has dandy debugging tools, but nothing command-line-ish in
any of them.

------
zobzu
From TFA: " _At best, unikernels amount to security theater, and at worst, a
security nightmare._ "

As a security engineer, that's a good one sentence summary from my point of
view of unikernels, since, forever.

I think the reason why unikernels are being developed is due mostly to
ignorance, and if any of them is successful, it will morph into an OS that is
closer to Mesos, Singularity, or even Plan9. That's faster, safer, more
logical, etc.

~~~
viraptor
I'm not sure how this is different from containers security. Provided you
strip it down properly and not "my container is whole system + my binary", how
is the exposure different exactly?

Both will prevent persistence, both are restricted outside, not internally. If
anything I'd say that reduced number of devices give you lower attack surface
over hypercalls (unikernels) than having direct access to all the syscalls
(container).

What's the huge difference and where's the theater?

~~~
cyphar
If you look at proper OS virtualization implementations (like Zones and
jails), where syscalls that aren't safe just don't work, then the difference
is more apparent.

------
ori_b
The key thing to realize, I think, is that if you're using virtualization, a
unikernel is nothing more than a process that uses a very strange system call
API.

~~~
lmm
Sure. Unikernels are just a way to a) use an extremely restricted system call
API to enforce separation between processes b) have my processes be VMs for
memory-safe languages for safety c) not implement a user/kernel mode
distinction within a single application because it's just overhead at that
point.

~~~
ori_b
> _a) use an extremely restricted system call API to enforce separation
> between processes b) have my processes be VMs for memory-safe languages for
> safety c) not implement a user /kernel mode distinction within a single
> application because it's just overhead at that point._

But the second two points are already covered by just writing an application
and _not_ running it in a virtual machine. Remember, your VMs are already
running on an OS.

And the first -- I'm not convinced that qemu and x86 is all that much more
restricted than a well jailed process. Given the complexity of the PC, and the
number of critical Xen/KVM/... vulnerabilities, it certainly isn't trivial to
emulate securely.

Note, there is one advantage to unikernels, and that's lower overhead access
to network hardware than you get with the socket API. This advantage is also
available with netmap.

~~~
lmm
> And the first -- I'm not convinced that qemu and x86 is all that much more
> restricted than a well jailed process. Given the complexity of the PC, and
> the number of critical Xen/KVM/... vulnerabilities, it certainly isn't
> trivial to emulate securely.

Xen has their own priorities and I have my views on their code quality. The
interface is that much smaller that it should be much more possible to
implement securely than the unix API (which isn't even well-defined). People
elsewhere are talking about seL4; I hope we'll one day see a formally verified
hypervisor. I don't think we'll ever see a formally verified unix container.

In the long run you're right, there's ultimately no difference between
compiling my OCaml to a binary that runs on a secure formally verified
microkernel and building it into a unikernel that runs on a secure formally
verified hypervisor. But I can take steps towards the latter now - I can
deploy unikernel systems to EC2 today, and while they may not be more secure
than deploying processes to Joyent today, they're using an interface that
should make it possible to run them on a more secure environment.

~~~
rwmj
You can use seccomp to reduce the scope of the Linux system call API.

I think the real point here is that AWS exists, and so virtualization is the
new baremetal, but the advantage over baremetal is the range of "hardware" is
much more limited. You have virtio / XenPV disks, not twenty different SCSI
devices you need to write drivers for and debug. Therefore writing interesting
kernels directly to the virt layer and running those in the cloud makes sense.

------
pcwalton
It's not by any means the main point of the article, but: I'm not sure citing
the Rust mailing list post on M:N scheduling is proof that it's a dead idea.
The popularity of Go is a huge counterexample.

~~~
im_down_w_otp
+Erlang, +Pony, +Clojure (also STM waves hello), +Haskell

I also found that reference highly dubious.

I mean, the reasons Rust abandoned it were quite legitimate. It's a systems
language whose originally segmented stacks performed poorly and were thus
removed, which undercut anything resembling the "lightweight" promise of
lightweight tasks. Combine that with the overhead of having to maintain two
distinctly different IO interfaces, due to the lack of unification between the
standard runtime and the libuv-based M:N scheduler, and of course Rust needed
to punt that semantic out of the core runtime and move it to a domain where
that kind of functionality could be implemented as a library/framework
instead. Otherwise writing consistent IO libraries for Rust would be a massive
pain in the ass.

The point of Rust isn't to be an opinionated framework that provides a set of
prescriptive models for solving problems like Erlang/OTP does. The point is to
be a generic systems language that you could use to build a new Erlang/OTP
shaped thing with.

I realize I'm quite literally preaching to the head of the International Choir
Association right now though. ;-)

------
chris_wot
I think Cantrill is doing a massive favour for those who are pro-Unikernels -
he's essentially trolling them and will force them to come up with responses
to some of the issues he's making.

Given how invested Joyent is in their current positions, I can see why
Unikernels may seem a threat, but none of the things Cantrill has raised as
concerns seem insurmountable.

------
zmanian
Op seems to misunderstand the following:

1\. Your hypervisor is the security boundary.

2\. Unikernel design lets you maximize the security benefits of AppSec and
LangSec by removing the large OS surface area.

~~~
chubot
No, all these were addressed in the article.

On 1) " Hypervisor vulnerabilities emphatically exist; one cannot play up
Linux kernel vulnerabilities as a silent menace while simultaneously
dismissing hypervisor vulnerabilities as imaginary"

This is obvious, but it's true that people somehow think hypervisor
vulnerabilities don't exist. Amazon and Linode just rebooted a bunch of
machines because of a Xen vulnerability. Could something like Xen be more
secure than the Linux kernel? Maybe. That's a good question, and one I haven't
seen the answer to.

I actually doubt it because emulating hardware is probably full of more "C
tricks" than kernel code, but I'm not an expert here. Another complication is
that when you use a hypervisor, you always have Linux _in addition_. You
generally use its drivers for the real hardware. It would seem that Linux +
hypervisor is less secure than Linux alone. But there are probably some
mitigating factors.

2) "And to the degree that unikernels don’t contain much code, it seems more
by infancy (and, for the moment, irrelevancy) than by design."

I agree that unikernels are smaller at the moment mostly because they don't
have a lot of features. (Writing in a memory safe language helps, but it also
hurts! Because I need to run linear algebra in production, etc.) When you run
them on a hypervisor, they actually rely on Linux for real drivers and such.
So you're not actually getting rid of the code -- you're moving it around.

~~~
lmm
>Could something like Xen be more secure than the Linux kernel? Maybe. That's
a good question, and one I haven't seen the answer to.

It ought to be, because the attack surface is zillions of times smaller (x86
spec vs all the system calls offered by linux).

>you always have Linux in addition. You generally use its drivers for the real
hardware. It would seem that Linux + hypervisor is less secure than Linux
alone

Doesn't have to be Linux - but even if it is, users have no way to exploit
most Linux vulnerabilities because the hypervisor (hopefully) doesn't make
most possible system calls.

~~~
cyphar
> >Could something like Xen be more secure than the Linux kernel? Maybe.
> That's a good question, and one I haven't seen the answer to.

> It ought to be, because the attack surface is zillions of times smaller (x86
> spec vs all the system calls offered by linux).

Bullshit. Either your hypervisor is a user-mode program (it uses syscalls) and
therefore requires a full kernel underneath all of the other abstractions
you've got (which makes it slower), or it's like KVM, where it's kernel-mode
and thus doesn't need syscalls to break the host (does the phrase "floppy
driver local root" mean anything to you?).

~~~
lmm
I'm talking about the complexity of what the kernel-level thing is
implementing. It's pretty easy to make a secure Hello World program, even if
it's running in kernel mode. It's arguably impossible to make a secure
implementation of the linux API, which is not even formally documented. The
x86 spec is not simple but at least there is a spec. Floppy driver local root
is a good example - the linux kernel suffers from that, but a hypervisor might
well not even implement a floppy driver.

------
toast0
I'm not likely to run a unikernel anytime soon, but I wanted to respond to
this:

> And as shaky as they may be, these arguments are further undermined by the
> fact that unikernels very much rely on hardware virtualization to achieve
> any multi-tenancy whatsoever.

Multi-tenancy is needed in some cases, but I don't need it, we use the whole
machine, and other than the one process that does all the work, we only have
some related processes for async gethost, monitoring/system stats processes,
ntpd, sshd, getty.

------
readams
One of the things that really falls flat is the claim that the
security is bad for unikernels. The comparison point though is not a
traditional OS running in a hypervisor but a container running on the host OS.
In that comparison I think unikernels are emphatically more secure than what
you get on Linux, and have essentially all of the same advantages of
containers (plus a few extra ones).

For Joyent of course they have a book to talk up and they want to sell you
their own solution which looks more like containers than a hypervisor. The
Joyent solution is I think undoubtedly very interesting and well-considered
but I have a suspicion that they've hitched their wagon to the wrong horse and
Linux will keep winning.

------
PaulHoule
I dunno.

For a long time the dominant programming environment for IBM mainframes has
been VM/CMS, where VM is something like VirtualBox and CMS is something a lot
like the old MS-DOS, i.e. a single process operating system. Say what you like
but it was a better environment than anything based on micros until you
started seeing the more advanced IDEs on DOS circa 1987 or so.

Now the 360 was a machine designed to do everything, but it's clear the
virtual memory in most machines is an issue in terms of die size, cost, power
consumption and performance and I wonder if some different configuration in
that department together with a new approach to the OS could make a
difference.

------
Animats
Can you run the unikernel under a debugger when testing? Can you get crash
dumps? Stack backtraces?

Unless you're running your unikernel on bare metal, it's still running under
an OS. It's just that the OS is called a hypervisor and is less bloated than
most OSs.

~~~
joveian
Here is Antti's response for rumprun:
[https://news.ycombinator.com/item?id=10719055](https://news.ycombinator.com/item?id=10719055)

That works even on bare metal if the network works. Similarly, there are
remote system calls that can be used for netstat and such. I don't think there
is a good way to secure them in many environments at this point, but adding
spipe might not be too hard.
[https://github.com/rumpkernel/rumpctrl](https://github.com/rumpkernel/rumpctrl)

There is also frankenlibc where you can run the application as an actual
userland process (but still with the rump kernel) under a variety of operating
systems. I think applications that run on one (frankenlibc or rumprun) should
be able to run on the other without changes (other than build related).
[https://github.com/justincormack/frankenlibc/](https://github.com/justincormack/frankenlibc/)

I'm not sure if NetBSD's kernel debugger or crash dumps work with rump kernels
at this point. I haven't played with them myself yet, just keeping an eye on
what other folks are doing. And yes, hypervisors also have debugging tools.

------
patrickaljord
Nothing like a 15 paragraphs corporate blog to explain why a technology is bad
and unfit for production to promote said technology. Microsoft used to do the
same with linux and now we have this
[http://blogs.technet.com/b/windowsserver/archive/2015/05/06/...](http://blogs.technet.com/b/windowsserver/archive/2015/05/06/microsoft-
loves-linux.aspx)

------
hughw
Isn't it a feature of (some) unikernels that you can fire one up to respond
to some request, and tear it down in milliseconds? If so, running an AWS
Lambda-like service with all the isolation you get in a HVM seems desirable
for some situations. The isolation provided by a Docker container might not be
good enough. It's a feature whose benefits, for some applications, might
balance the debugging costs the article outlines.

------
jorge-fundido
"Unikernels are entirely undebuggable."

I'm confident this will be addressed eventually. Anyone have a sense of what
that will look like? Something like JMX? Something like dumping core,
restarting, and analyzing later?

~~~
nulltype
The most basic is, of course, printf debugging to a console or logging
service. You could also connect over a network socket and inspect the current
state of the unikernel, or do any sort of debugging things. This sounds the
same as remote debugging now, the only difference is that the debugger agent
would have to be implemented as a library (or hypervisor feature) rather than
as a separate process. Erlang is supposedly fancy enough that you can do this
sort of stuff with just the builtin tools.

It also seems like you could debug the unikernel like any other process if you
run it as a VM on your machine. Except of course you can run any, say, x64
unikernel, instead of only programs built for your OS.

------
kriro
I think he brushes by the security argument too quickly. Unikernels are
(typically) smaller with less attack surface and more importantly it's easier
to reason about them. I'd argue that this ability to keep more of the entire
OS in your head at any given time improves security on a high level of
abstraction.

------
ccostes
Reading through the article I feel like the author and I are describing
different things when we use the term unikernel, which is surprising because
we both have experience with the same unikernel: QNX. I'm not very familiar
with the other examples, but my QNX application definitely does have processes
that I can see using top, htop, etc., and interfaces with system hardware
using the QNX system calls; all things the article describes as not being
features of unikernels.

Either the article is written in the context of writing kernel software, which
wouldn't have much of an impact on my decision to run my application on a
unikernel OS or not, or QNX is a far outlier from other unikernel OS's and
that's why I'm so confused.

~~~
jsnell
QNX is not a unikernel, it's a microkernel. Unikernels do not have an
abstraction for unprivileged, memory-isolated and independent processes. The
application is by definition in the kernel.

~~~
ccostes
Ah, thanks for the correction. The controversy around unikernels makes a lot
more sense now.

------
erichocean
Simple explanation for the article: Bryan is "talking his book".[0]

Joyent doesn't sell unikernel services, hence _unikernels are bad_. Color me
shocked. Is it me, or has Joyent become less than upfront about their motives
over the last few years? I don't require everyone to embrace "don't be evil"
or whatever, but I always get a "righteous" vibe from Joyent employees that
seems at odds with their actual behavior. Maybe they feel under siege or
whatever, and are reacting to that? The whole thing is vaguely _off_ somehow.

[0]
[http://www.investorwords.com/8436/talking_my_book.html](http://www.investorwords.com/8436/talking_my_book.html)

------
Philipp__
Wow. This thread went just as I thought it will... The HN way.

I haven't had any experience with unikernels (still a student), but there are
a few concerning things about them. And the main thing is that those
concerning things are at their core.

I have only respect and admiration for Mr. Cantrill, but this post felt kinda
strange. After reading the last paragraph it sounded like an ad. Maybe they
got scared of Docker possibly expanding and taking part of their cookie. I
don't know, but these discussions were interesting to read at least...

~~~
kapilvt
well.. they sort of embraced docker to market their platform, which was
already a better docker.. ie they took Solaris zones + Crossbow networking,
and stuck the docker api on it (in nodejs.. well, no one's perfect ;-), then
did a brilliant engineering hack on lx zones for solaris kernels to let you
run linux userspace on it..

re the hacker news way.. i'm still waiting for the hackers to come out and
redeem this article's comments. [edit] haven't finished reading all comments.

------
jupp0r
The main use case for unikernel apps (the way I see it) is running
language-specific VMs like BEAM, MRI, or the JVM almost directly on bare
metal and getting rid of all the complexity of OSes. The idea is to make it
easier to debug, optimize, and tune applications by removing traditional
OSes' complex kernels from the equation. The real argument for security
(which the author omits) is derived from that: 20 million fewer lines of code
in the stack that you deploy.

------
woah
So can someone intelligently contrast a hypervisor with an OS? Both allow
multiple applications to run on some hardware, what are the major differences?

~~~
shykes
You could argue that hypervisor _is_ a special type of OS.

Compared to a "traditional OS":

\- hypervisor is much smaller, and as a result reduces security risk

\- hypervisor is younger and has less compatibility burden, so typically
carries less technical debt, can improve faster.

\- hypervisor exposes lower-level interface, so more work is left to the app
developer (or the build toolchain, which is where unikernels come in).

\- hypervisor has no support for legacy APIs such as posix or win32. As a
result almost no applications target it, which makes it a hidden piece of
plumbing and makes it harder to replace the traditional OS, even when it's
redundant from a strictly technical point of view.

~~~
rumcajz
Good point. The advantage that I am looking to get from unikernels, one that
no one seems to mention, is that the whole area of functionality that's above
hypervisor but below posix/win32 is suddenly open to play with for application
developers.

~~~
pjmlp
Unikernels follow an old approach from high-level languages in the '60s,
'70s, and early '80s, with their full-stack approach.

Think Burroughs, Lisp Machines, Mesa/Cedar, Lilith, SPIN, Oberon,...

Basically the language runtime is the OS and access to lower layers is done
via the type system.

~~~
silentOpen
> An operating system is a collection of things that don't fit into a
> language. There shouldn't be one.

~ Daniel Ingalls, co-creator of Smalltalk
[http://web.archive.org/web/20070213165045/users.ipa.net/%7Ed...](http://web.archive.org/web/20070213165045/users.ipa.net/%7Edwighth/smalltalk/byte_aug81/design_principles_behind_smalltalk.html)

~~~
pjmlp
Yep.

C and C++ are the outliers here.

Although C++ is moving into a richer runtime, thus increasing the likelihood
to write OS independent code without relying on third party libraries.

Even C, if POSIX had been part of ANSI C, would fall into this scenario.

In a way, UNIX can be seen as C's original runtime.

------
kev009
I tweeted to him to research IBM's zTPF before writing this, I guess it
conflicts with the narrative he's telling though. In general, I agree with his
sentiments, but there are no absolutes, only trade offs here. You can, for
instance, hook a debugger into the kernel or through the hypervisor. And
debugging hardware looks a lot like debugging a unikernel in that sense.

------
Mojah
In case anyone is still struggling with the concept of a 'unikernel', I found
this article to help in clearing it all out: [https://ma.ttias.be/what-is-a-
unikernel/](https://ma.ttias.be/what-is-a-unikernel/)

------
sengork
For what it's worth:

[https://www.google.com.au/trends/explore#q=unikernel%2C%20li...](https://www.google.com.au/trends/explore#q=unikernel%2C%20linux%20containers%2C%20solaris%20zones&cmpt=q&tz=Etc%2FGMT-11)

------
dicroce
If the definition of unikernels doesn't include hypervisors, then arguments
against unikernels that specifically attack hypervisors are only viable
against unikernels running on hypervisors.

~~~
cyphar
How do you intend to run more than one piece of software on a beefy server?
Face it, you _need_ hypervisors if you want to actually use your production
machines.

------
dustingetz
So you need good enough logs that you can debug production without ssh to
production (since there is no ssh, bash, ps etc)? Don't we already have this?

~~~
cyphar
Why do so many people on HN think that nothing but logs can allow you to debug
very complicated systems? The only type of "log" which might actually help you
is a coredump, but what if the bug isn't fatal? What if your software just
sucks (is slow or returns an incorrect result)? How are logs going to help you
there? For those cases you need dynamic instrumentation to coerce the system
into telling you what it's doing at every level of the stack.

~~~
dustingetz
How about don't write shitty software? Just because Joe's flower shop wants
shitty custom database software doesn't mean I can't push the state of the
art and what's possible.

~~~
cyphar
> How about don't write shitty software? Just because Joe's flower shop wants
> shitty custom database software doesn't mean I can't push the state of the
> art and what's possible.

"I write perfect software, which will always either crash or run perfectly."
Given the fact that runtimes like Python and Node have had problems in this
field, I'm calling bullshit.

But that glosses over the fact that you might need to run some software that
you didn't write (shocker, I know). If that software has non-fatal failure
modes, logs won't help you. Logs won't help you with most fatal failure modes
anyway (you need coredumps in most hard cases).

Thinking that "stop writing shitty software" is a solution to the existence of
bugs is just harmful to anyone who has to manage your software. If you think
your software had never failed, that's because the people debugging it didn't
want you in the room at the time.

------
jksmith
>"There are no processes, so of course there is no ps, no htop, no strace —
but there is also no netstat, no tcpdump, no ping! And these are just the
crude, decades-old tools."

So does this mean something like a Symbolics machine or an Oberon machine
can't be debugged, or does this mean that the unikernel has to be debugged at
a higher level by the application(s) it's dedicated to?

~~~
im_down_w_otp
I have exactly zero trouble live debugging, tracing, profiling, or "topping"
via etop my Erlang unikernels.

I also have zero trouble embedding statsderl and custom lager backends into
them so that all metrics and log messages are forwarded to aggregation
infrastructure.

------
nevir
TL;DR for those reacting to the title, but not reading the entire article:

Unikernels are young, and lack tooling/robustness that we have in more
traditional approaches. They are not production ready _yet_ , but will likely
become a prominent way of building and deploying applications in the future.

~~~
jacobparker
Er, I think you should re-read it :)

> Unikernels are unfit for production not merely as implemented but as
> conceived: they cannot be understood when they misbehave in production — and
> by their own assertions, they never will be able to be.

~~~
nevir
Ha, you're right

------
0xdeadbeefbabe
> it is “all OS”, if a crude and anemic one.

Crude or anemic? The program does what you want or it doesn't. Quit trying to
make it a human.

Edit: If the author can believe programs are crude or anemic he clearly likes
to look at them from a high level, but you need a low level view to get
excited about unikernels.

Edit: What?

~~~
cyphar
The author is a kernel developer. He cowrote DTrace and worked at Sun as a
kernel developer while they developed Zones and ZFS. I don't see how a kernel
guy could only see programs from a high level.

~~~
0xdeadbeefbabe
That is confusing. Maybe he just doesn't like unikernels.

~~~
cyphar
> That is confusing. Maybe he just doesn't like unikernels.

This is true. He has expressed hatred of them for a very long time. I'm not
sure why (this article appears to explain it, but I don't feel particularly
convinced by some of the points).

------
cbd1984
How are unikernels unfit for production? CMS has been in production longer
than most here have been alive.

------
MCRed
This article seems to miss the point. Note that unikernels seem useful for
running in VMs, not on bare metal. Thus you get the isolation of a true VM
with container-like performance & resource usage.

~~~
misterbisson
What if you could have isolation that's as good as or better than a VM in a
container?

~~~
lmm
Then I'd be interested. But Solaris is x million lines of C code and the
system call attack surface is huge, so I really don't think Joyent can offer
that. Fundamentally if the author believes that you need a full traditional
unix userland inside the container for debugging then they're never going to
be able to offer that level of isolation.

~~~
cyphar
Not inside the container. Containers are transparent to the host (Zones
especially). You have tooling in the kernel, and you take coredumps when a
container process dies.

~~~
lmm
See that's the model I'd expect to take - but that model makes perfect sense
in the unikernel world too. So the author must be claiming that you need the
debugging tools _inside_ the container, otherwise the point about not having
tcpdump etc. available makes no sense.

~~~
cyphar
The problem with applying the same logic to hypervisor'd unikernels is that
hypervisors make the guest system opaque. So how could you meaningfully run
DTrace on that system?

