
Docker Built-In Orchestration Ready for Production: Docker 1.12 Goes GA - samber
https://blog.docker.com/2016/07/docker-built-in-orchestration-ready-for-production-docker-1-12-goes-ga/
======
meddlepal
Is there any good reason to use this over the much more mature Kubernetes?

~~~
samber
Easier to go from dev to prod?

~~~
andrewstuart2
Not really, though. It's super easy to run a local kube instance (now even
cross-platform [1]). In fact, it's much easier at the moment to spin up prod
environments for Kubernetes, so I'd argue that Kube still wins in the
dev->prod arena.

[1] [https://github.com/micro-kube/micro-kube](https://github.com/micro-kube/micro-kube)

~~~
olalonde
But you have to learn a bunch of new words (manifests, pods, controllers,
etc.), and while spinning up new prod environments is relatively
straightforward, maintaining/upgrading one is not. Kubernetes is still a lot
more mature though.

------
drchiu
Given that Flynn came out with 1.0 the other day, I would really appreciate it
if someone with knowledge of Flynn and Docker orchestration / Docker Compose
could describe the differences.

~~~
jacques_chester
If I'm not mistaken, it's apples and oranges. Docker Orchestration is closer
to the Kubernetes/Diego/Mesos level, in that it's assigning opaque workloads
to machines in a distributed cluster.

Flynn is more of a PaaS: it has additional logic to wire up your app, inject
services, route traffic and so on.

I work for Pivotal, which donates the majority of the engineering on Cloud Foundry, so
I wind up peeking at lots of other cloud platforms that are emerging in this
space. For example, Red Hat dropped their own code and built OpenShift 3
around Kubernetes.

For Cloud Foundry, we don't use any of these HN-famous systems currently. We
built and use Warden/Garden because Docker didn't exist at the time. We built
and use the Diego orchestrator because Kubernetes, Mesos and co didn't exist
at the time. There's a lot of convergent evolution occurring in this area
because lots of smart people see the same problems and arrive at similar
ideas.

As for scaling up, Nomad currently holds the benchmark porn crown: they posted
a 1-million number a few months ago. That said, it depends a lot on what
you're counting. We've had Diego running at 10,000 genuine
application instances -- full routing, full logging, full service injection --
in production for some time now. There's work to push the official-we-can-
sell-it-to-F1000s-and-not-get-sued limit to 250,000.

Sure, there are companies who need more than 250,000 copies of their apps
running and it may be a bit longer before Cloud Foundry can meet their needs.
I mean, I _guess_ they can probably afford to run _two_ copies of Cloud
Foundry, if they _have_ to. They might have to hire another operator. Ruinous.

Anyhow. We built an early version of Diego that was to my mind architecturally
very elegant but, as I am given to understand, fell prey to a stampeding herd
problem as we began to approach heavier workloads and simulations showed it
would only get worse unless we went to something simpler.

I'm not sure I agree with the argument in this post that Docker's approach is
particularly novel in this area -- separating workers and managers seems
pretty standard to me. Diego breaks the work into more than a dozen
cooperating subsystems, variously distributed into brains and cells. Diego
relies on etcd, which Docker's engineers consider to be an operational
overhead. That makes sense because of Docker's engineering assumptions (that
someone has _only_ installed Docker), so they kinda sorta _have_ to be
autarkic on this matter.

Cloud Foundry -- and by extension Diego -- can rely on BOSH to make
operational management of etcd or any other service very close to a non-issue,
while pushing that engineering effort out of the core product. I heard
recently that a standard OSS Cloud Foundry has 90 (ninety) running processes.
I never knew this because I've never noticed, because the upgrades are
relatively trivial and almost offensively reliable.

Docker say that read optimisation is a major win from using an inbuilt store,
but this presupposes a read-heavy design. A lot of what Diego does is push
that work out of the central manager towards the edges. Mesos achieves a
similar trick (though it pushes back to the consumer, rather than to the
executor). Originally Diego's approach was more decentralised, with a full
auction model (meaning that the central orchestrator didn't need to update
_any_ node information at all), but as I said, the stampeding herd of bids on
new jobs or processes made it hard to scale. (I am not sure what the
alternative was, I'm just an interested spectator.)
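To make the auction idea concrete, here's a minimal sketch (hypothetical names, not actual Diego code, which I've never read): each cell bids on a new workload based on its spare capacity, the best bidder wins, and the orchestrator holds no per-node state at all.

```python
class Cell:
    """A worker node that bids on workloads based on spare capacity."""
    def __init__(self, name, capacity_mb):
        self.name = name
        self.capacity_mb = capacity_mb
        self.running = []

    def bid(self, workload):
        """Return a bid (lower is better), or None if the workload won't fit."""
        free = self.capacity_mb - sum(w["mem"] for w in self.running)
        if free < workload["mem"]:
            return None
        # More headroom after placement => lower (better) bid.
        return workload["mem"] - free

def auction(cells, workload):
    """Broadcast the workload, collect bids, place it on the best bidder.

    Note the orchestrator tracks no per-cell state: it only sees the bids.
    """
    bids = [(c.bid(workload), c) for c in cells]
    bids = [(b, c) for b, c in bids if b is not None]
    if not bids:
        raise RuntimeError("no cell can fit this workload")
    _, winner = min(bids, key=lambda bc: bc[0])
    winner.running.append(workload)
    return winner

cells = [Cell("cell-a", 1024), Cell("cell-b", 2048)]
winner = auction(cells, {"name": "web", "mem": 512})
print(winner.name)  # cell-b wins: it has the most headroom
```

The downside described above falls out naturally from this shape: every new job triggers a bid from every cell, so bid traffic grows with cluster size times job arrival rate, which is presumably where the stampeding herd comes from.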

Reminder: I work for Pivotal, insofar as Docker is turning back into a PaaS
company, we're becoming competitors. I don't work on Diego and I never have,
so my understanding of architecture and the course of its evolution is based
on snippets of conversation and distant fan-boying. My understanding of
Docker's new architecture is from hastily skimming a single blogpost and
includes the industry-standard level of heavily and unfairly discounting every
other engineer's intelligence and foresight.

So take everything I said under careful consideration.

