The k8s upstream was forward-thinking enough to make the scheduler pluggable, which allows for the (imo) holy grail of something like this: https://github.com/mesosphere/kubernetes-mesos. This gives you the nice logical "pod" description for services, running on the battle-tested Mesos scheduler.
There are many 10k+ node bare-metal Mesos deployments (Apple, Twitter, etc.). There aren't yet many Kubernetes deployments at that scale. They truly are mutually beneficial: Mesos makes ops happy, and k8s makes devs happy. Together you get a relatively easy-to-set-up internal PaaS (your own Heroku without a ton of work), more or less.
Disclaimer: I'm a heavy mesos and apache aurora user.
I'm still not quite understanding what utility Kubernetes brings to the table if you can also use it with Mesos. If you use Mesos, why involve Kubernetes at all, and not some Mesos-specific framework like Marathon or Aurora? Is Kubernetes simply a competitor to those frameworks?
My concern about Mesos is mainly footprint and complexity. You need to run ZooKeeper, the master, the slaves, and then each framework. Only Mesos itself is written in C++; everything else runs on the JVM, which is a pretty significant memory hog. Installing Mesos increases the complexity of the deployment/ops stack by a huge margin; you reap many benefits, of course, but Mesos is a lot more opaque and complex than a few daemons and some SSH-based scripts.
That being said, k8s is sexy stuff; it just ties you to Docker and, I believe, soon Rocket as well. When I first started evaluating both (around Docker 1.2.x), Docker was not super viable and was pretty buggy. With 1.6.x and newer, most of my original concerns cease to matter. They are both excellent technologies; use whatever works for your environment.
I've worked on Cloud Foundry, which is ostensibly a Heroku competitor.
The idea that you can replicate Heroku's full functionality "easily" is just silly.
Full-feature PaaSes do a lot of things, including a whole bunch of tedious nitty-gritty details.
We're well into the days of early maturity on PaaS products. You can install Cloud Foundry or OpenShift, or host on Heroku. Writing your own PaaS at this point is a bit like writing a custom operating system circa 1995. Unless you have a compelling reason to do so, you'd be utterly crazy to.
I did this on my MacBook, using VirtualBox, Vagrant, and this:
^ I started with that, and then installed Chronos with apt-get in addition.
Specifically, for launching Docker containers, this was useful:
I didn't try Aurora but it seems it'd be an alternative to Marathon + Chronos (Mesos calls all of these "frameworks").
The whole thing is pluggable and can use various distributed state backends (etcd, zookeeper etc) or IP connectivity backends (veth, macvlan, vxlan, openvpn etc) without changing your application. Service discovery uses DNS so you don't need to modify your application to take advantage of it. It's probably the most significant change to Docker in the last year.
This will make the integration with Kubernetes smoother. Currently Google is forced to rip out Docker's native networking stack because it is not flexible enough for their opinionated networking model. This causes many Docker applications to break in Kubernetes today. That problem should go away with Docker 1.7+ because Google-style networking can be expressed as a Docker plugin, which Kubernetes can load programmatically as part of its orchestration. An added benefit is that you can augment Docker with Google-style networking even if you use Kubernetes competitors like Mesos, Swarm, Cloud Foundry, etc.
(EDIT added details more relevant to Kubernetes)
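To make the plugin idea concrete: as I understand the libnetwork remote-driver spec, a network plugin is just an HTTP/JSON service that Docker probes and then calls into. Here's a minimal sketch in Python; the endpoint names follow the spec as I remember it, but the stub behavior, port, and discovery details are illustrative, not a real driver:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A network plugin is just an HTTP service speaking JSON. Docker first
# probes /Plugin.Activate to learn what the plugin implements, then
# calls NetworkDriver.* endpoints as networks/endpoints are created.

def handle(path, body):
    """Dispatch a plugin request; return the JSON response body."""
    if path == "/Plugin.Activate":
        # Advertise which plugin interfaces we implement.
        return {"Implements": ["NetworkDriver"]}
    if path == "/NetworkDriver.CreateNetwork":
        # body carries a network ID etc.; a real driver would program
        # its backend (vxlan, macvlan, ...) here. Empty dict = success.
        return {}
    return {"Err": "unhandled endpoint: " + path}

class PluginHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        resp = json.dumps(handle(self.path, body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(resp)

# To actually serve it (Docker discovers plugins via a spec file
# pointing at the address), you'd run something like:
#   HTTPServer(("127.0.0.1", 9999), PluginHandler).serve_forever()
```

The point is that "Google-style networking as a plugin" means shipping a small daemon like this, rather than patching Docker itself.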
Sorry about the barrage of questions. As you can probably guess, I'm very interested in trying this out.
On a past project I did service discovery with Docker and CoreOS using SkyDNS with etcd. Services would register their network location in etcd and SkyDNS would translate those entries into DNS records. SkyDNS ran on every host and the app-level containers linked to the SkyDNS container.
If a container was moved across hosts, the etcd entry would be updated automatically and eventually the updated DNS entry for that service would propagate across the cluster.
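To sketch the mechanism: SkyDNS (v2, as I recall) watches a subtree of etcd where the key is the DNS name reversed, and the value is a JSON record with the host and port. Here's a toy model of that convention; the service name and address are made up, and the dict stands in for a real etcd client:

```python
import json

# SkyDNS2 convention (roughly): the etcd key is the DNS name with its
# labels reversed, rooted under /skydns. So web.cluster.local lives at
# /skydns/local/cluster/web, with the address stored as a JSON value.

def etcd_key(dns_name, prefix="/skydns"):
    """Map a DNS name to the etcd key SkyDNS watches for it."""
    return prefix + "/" + "/".join(reversed(dns_name.split(".")))

def register(store, dns_name, host, port):
    """What a service does on startup: write its network location.
    In a real cluster `store` would be etcd (e.g. via etcdctl set)."""
    store[etcd_key(dns_name)] = json.dumps({"host": host, "port": port})

store = {}  # stand-in for etcd
register(store, "web.cluster.local", "10.0.1.5", 8080)
print(etcd_key("web.cluster.local"))  # -> /skydns/local/cluster/web
# When the container moves hosts, the same key is rewritten with the
# new address; SkyDNS picks up the change and, once TTLs expire, DNS
# answers across the cluster follow.
```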
Well, let's not get ahead of ourselves.
The only thing I'll add is that k8s isn't targeting the same scale as Mesos. Their current goal is to support clusters of 400-500 nodes, max.
Source: One of the core k8s developers I met at a CoreOS meetup in SF earlier this year. They said if I needed to go beyond 500 nodes that I should probably look at something else.
Disclaimer: I work for Kismatic.
It might not be correct for much longer, but if it is the case now, how can you say it's categorically incorrect?
ohh i bet we're in the same building right now
when we were looking at building k8s our mission was to help the world move forwards to a more cloud native approach to development. by cloud native i mean container packaged, dynamically scheduled, micro-services oriented. we figured that in the end our data centers are going to be well suited to run cloud native apps, since they were designed from the ground up for this approach to management, and will offer performance and efficiency advantages over the alternatives. we also however recognized that no matter how cheap, fast and reliable the hosting offering is, most folks don't want to be locked into a single provider and Google in particular. we needed to do what we were doing in the open, and the thing that we built needed to be pattern compatible with our approach to management and quite frankly address some of the mistakes we had in previous frameworks (Borg mostly as a first system).
we looked really closely at Apache Mesos and liked a lot of what we saw, but there were a couple of things that stopped us just jumping on it: (1) it was written in C++ and the containers world was moving to Go -- we knew we planned to make a sustained and considerable investment in this and knew first hand that Go was more productive; (2) we wanted something incredibly simple to showcase the critical constructs (pods, labels, label selectors, replication controllers, etc) and to build it directly with the community's support, and mesos was pretty large and somewhat monolithic; (3) we needed what Joe Beda dubbed 'over-modularity' because we wanted a whole ecosystem to emerge; (4) we wanted the 'cluster environment' to be lightweight and something you could easily turn up or turn down, kinda like a VM; the systems integrators i knew who worked with mesos felt that it was powerful but heavy and hard to set up (though i will note our friends at Mesosphere are helping to change this).
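for anyone who hasn't played with these constructs: a label selector is just a key/value match over pod labels, and a replication controller is a loop that keeps N matching pods alive. a toy sketch in python (not the real API -- the pod names and helper functions here are made up):

```python
# toy model of k8s labels / selectors / replication controllers.
# just the shape of the constructs, not the real object model.

def matches(selector, labels):
    """A selector matches a pod iff every selector pair appears in its labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def reconcile(pods, selector, desired, make_pod):
    """One replication-controller loop iteration: count the pods the
    selector picks out, then create or delete pods to hit `desired`."""
    mine = [p for p in pods if matches(selector, p["labels"])]
    for _ in range(desired - len(mine)):
        pods.append(make_pod())
    for p in mine[desired:]:
        pods.remove(p)
    return pods

pods = [{"name": "web-1", "labels": {"app": "web", "env": "prod"}}]
make = lambda: {"name": "web-x", "labels": {"app": "web", "env": "prod"}}
reconcile(pods, {"app": "web"}, 3, make)
print(len(pods))  # -> 3
```

the nice property is that the controller never tracks individual pods; it only re-evaluates the selector against current state, which is what makes the whole thing declarative.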
so we figured we'd do something simple to create a first class cluster environment for cloud native app management, 'but this time done right' as Tim Hockin likes to say every day.
now we really like the guys at Mesosphere and we respect the fact that Mesos runs the vast majority of existing data processing frameworks. by adding k8s on mesos you get the next-generation cloud native scheduler and the ability to run existing workloads. by running k8s by itself you get a lightweight cluster environment for running next gen cloud native apps.