

Towards Heroku for Unikernels: Part 2 – Self Scaling Systems - amirmc
http://amirchaudhry.com/heroku-for-unikernels-pt2/

======
josh2600
The things I hear from the Unikernel crowd sound a lot like the stuff I hear
from Docker, although I note that part of where Amir seems to be going is
where Docker came from[0]. I personally think that immutability is an
important component of building modern distributed systems (things that are in
production should be as immutable as possible; we've even experimented with
reverting machines to initial state on a rolling basis, for example). If you
have stateless services, it makes sense to constantly bring up boxes that are
at the initial state as memory leaks or other errors compound over time.

At Terminal we do this by having RAM-perfect snapshots (think VMWare style
snapshots, but without a hypervisor) and rolling new instances from an initial
state. The snapshotting works by taking the RAM state, CPU cache and disk
state at a given moment and committing it to disk for later restoration. Once
you have the primitive of being able to treat machine state like a file
system, you get a lot of properties that you might not otherwise have access
to (like being able to bring up machines with state in the time it takes to
read the RAM from disk).
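The snapshot-as-a-file primitive described above can be sketched roughly as
follows. This is a toy illustration, not Terminal's actual implementation;
the `MachineState`, `Snapshot`, and `Restore` names are invented for the
example:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"os"
)

// MachineState is a toy stand-in for a full snapshot: RAM contents,
// CPU registers, and a disk image, captured at one moment.
type MachineState struct {
	RAM       []byte
	Registers map[string]uint64
	Disk      []byte
}

// Snapshot serializes the state to a file, so machine state can be
// treated like any other file on disk.
func Snapshot(path string, s MachineState) error {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(s); err != nil {
		return err
	}
	return os.WriteFile(path, buf.Bytes(), 0o644)
}

// Restore brings a machine back by reading its state from disk;
// startup time is dominated by the read, not by a cold boot.
func Restore(path string) (MachineState, error) {
	var s MachineState
	data, err := os.ReadFile(path)
	if err != nil {
		return s, err
	}
	err = gob.NewDecoder(bytes.NewReader(data)).Decode(&s)
	return s, err
}

func main() {
	orig := MachineState{
		RAM:       []byte("hello"),
		Registers: map[string]uint64{"rip": 0x1000},
		Disk:      []byte("fs"),
	}
	if err := Snapshot("snap.bin", orig); err != nil {
		panic(err)
	}
	restored, err := Restore("snap.bin")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(restored.RAM), restored.Registers["rip"] == 0x1000)
	// hello true
}
```

A real implementation has to quiesce the machine and capture device state
consistently, but the payoff is the same: restore time is bounded by disk
read speed rather than boot time.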

I am overall quite bullish on Unikernels, but I do think there's quite a bit
of distance to cover between where we are now and where companies will feel
comfortable trusting their infrastructure to MirageOS.

I am particularly interested in running MirageOS without a hypervisor, but I
understand why that's not yet possible (please do correct me if I'm wrong). If
MirageOS is running in a container somewhere, I'd like to see it.

[0] [http://devops.com/blogs/docker-leaving-immutable-infrastructure-2/](http://devops.com/blogs/docker-leaving-immutable-infrastructure-2/)

~~~
amirmc
Given what you've described, I think you might be interested to look further
at Irmin [1]. It's not quite ready for prime-time but certainly stable enough
to kick the tires.

Regarding commercial uptake, the nice thing about the library approach is that
companies get to pick and choose the components they want (without even going
'full Unikernel'). For example, I'm aware of the cohttp library having
commercial users. The real issue is legacy code, which I mostly sidestep when
I write (though we're well aware of it).

Interesting that you mention MirageOS in containers. I don't see any reason
the two can't be compatible, but it would be good to hear more about what
you'd like to achieve (or how you'd expect it to work in terms of workflow).

[1] [http://openmirage.org/blog/introducing-irmin](http://openmirage.org/blog/introducing-irmin) (I think it's time we put together an overview page with all the links.)

~~~
josh2600
So the overall goal is to reduce the footprint of each user's
machine/application as much as possible within a larger aggregated pool of
resources. I see unikernels as one potential way of reducing the amount of
storage/memory each user or application needs.

With containers, you can do RAM deduplication under some circumstances, and
you can get much higher resource utilization that way, but I think we can
always do better, and so that's kinda why I have my eye on unikernels (also
because unikernels seem like a reasonable way of squeezing more performance
out of systems in some cases).
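For a sense of how content-based RAM deduplication works (the idea behind
Linux's kernel same-page merging), here is a toy sketch. The `dedupPages`
function is an invented illustration of the principle, not KSM itself:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const pageSize = 4096

// dedupPages maps each 4 KiB page to a single shared copy when its
// contents match a page already seen, the same content-based merging
// that KSM applies to VM/container memory. It returns the unique pages
// plus an index mapping each original page to its shared copy.
func dedupPages(pages [][]byte) (unique [][]byte, index []int) {
	seen := map[[32]byte]int{}
	for _, p := range pages {
		h := sha256.Sum256(p)
		i, ok := seen[h]
		if !ok {
			i = len(unique)
			unique = append(unique, p)
			seen[h] = i
		}
		index = append(index, i)
	}
	return unique, index
}

func main() {
	zero := make([]byte, pageSize)
	data := append(make([]byte, pageSize-1), 1)
	// Three workloads whose memory is mostly identical zero pages.
	pages := [][]byte{zero, data, zero, zero, data}
	unique, index := dedupPages(pages)
	fmt.Printf("%d pages stored as %d, index %v\n",
		len(pages), len(unique), index)
	// 5 pages stored as 2, index [0 1 0 0 1]
}
```

The win for unikernels is that there is simply less memory to deduplicate in
the first place, since each image carries only the libraries it actually uses.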

------
jacques_chester
Heroku doesn't get enough credit for their technical achievements.

I work for a company - Pivotal - which is, indirectly, a Heroku competitor.
We're the main contributors to Cloud Foundry, which is a PaaS that can be
deployed to private and public clouds.

PaaSes are more than "deploy and wire the app"; there's an eye-watering
amount of complexity under the hood.

If you're prepared to ignore legacy, sure, all that complexity goes away. But
so does a lot of accumulated capital in the form of existing software.

That being said, I think unikernels are a very promising line of attack on
service isolation that warrant further effort. I spent some time last week
poking around the Cloud Foundry codebase looking for places to inject changes.

My first impression[0] was that Diego would need some additions. That's still
true, but I think the logic changes are going to be less about the Diego
Executor (which mostly knows how to drive Garden-managed containers) and more
about introducing a new Stager backend[1].

I also now think that there'd need to be a Xen backend for Garden. So long as
you implement the generic Garden API[2], Diego can use that to create and
manage what it thinks of as containers.
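As a rough sketch of that idea, using an invented, heavily simplified
interface rather than the real Garden API linked at [2], a Xen-backed
implementation would satisfy the same lifecycle contract the orchestrator
expects from any container backend:

```go
package main

import "fmt"

// Backend is a simplified, illustrative container-lifecycle interface,
// not the actual Garden API. The point is that the orchestrator only
// sees this contract, never what a "container" really is underneath.
type Backend interface {
	Create(spec string) (string, error) // returns a handle
	Run(handle, cmd string) error
	Destroy(handle string) error
}

// XenBackend satisfies the interface by (hypothetically) booting a
// unikernel as a Xen guest instead of creating a Linux container.
// The Xen interactions here are placeholders.
type XenBackend struct {
	guests map[string]bool
	nextID int
}

func NewXenBackend() *XenBackend {
	return &XenBackend{guests: map[string]bool{}}
}

func (x *XenBackend) Create(spec string) (string, error) {
	x.nextID++
	handle := fmt.Sprintf("guest-%d", x.nextID)
	// Placeholder: here we'd build and boot a unikernel image from spec.
	x.guests[handle] = true
	return handle, nil
}

func (x *XenBackend) Run(handle, cmd string) error {
	if !x.guests[handle] {
		return fmt.Errorf("no such guest: %s", handle)
	}
	// A unikernel has no shell; "run" would mean booting with arguments.
	return nil
}

func (x *XenBackend) Destroy(handle string) error {
	// Placeholder: here we'd tear down the Xen guest.
	delete(x.guests, handle)
	return nil
}

func main() {
	var b Backend = NewXenBackend()
	h, _ := b.Create("mirage-www.img")
	fmt.Println(h, b.Run(h, "start") == nil)
	// guest-1 true
}
```

The real Garden API is considerably richer (streams, limits, properties), but
the shape of the argument is the same: anything that implements the contract
can be scheduled as if it were a container.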

During my free time last week I deliberately went looking for
"architecture-busters": use cases and application platforms that strain or
break how the Cloud Foundry architecture works.

Unikernels are one of the best examples I came up with. They severely stretch
the architecture at staging time. But they also show the strengths of the
Diego/CF architecture, which is in fact totally agnostic of containerisation
strategies. That's why Diego allows CF to run buildpacks and mount Docker
containers currently. It's why .NET applications are close behind and why
unikernels are actually a legitimate possibility.

[0]
[https://news.ycombinator.com/item?id=9264241](https://news.ycombinator.com/item?id=9264241)

[1] [https://github.com/cloudfoundry-incubator/stager/tree/master/backend](https://github.com/cloudfoundry-incubator/stager/tree/master/backend)

[2] [http://godoc.org/github.com/cloudfoundry-incubator/garden](http://godoc.org/github.com/cloudfoundry-incubator/garden)

