I am curious to see how long a shake-out period will exist before there's either a de facto stack of "compute resource" tooling, or whether there will always be a highly fragmented and diverse set of ways to accomplish your goals. Just off the top of my head (and there are many more), I'm thinking of Tectonic, Mesosphere, Rocket, and Kismatic as a few examples.
As a technologist and a planner, it's been challenging to see far enough into the future to decide which tools to devote myself to learning at this point. I do think we're certainly in a "post-public cloud" timeline, where we're getting good enough (or will be in 6-12 months) at abstracting virtualization right up to a millimeter or two below the application layer of our stacks. How we choose to do so currently seems to be up in the air.
In my mind, this opens up the possibility of compute as a resource much wider than had previously been possible. We'll be less reliant upon Azure, AWS, and GCP's mixture of PaaS and IaaS and much more interested in compute as a resource, likely from bare metal or private cloud providers.
I'm looking forward to the increased efficiency (both in compute power and cost) and security available in moving from application-level virtualization to operating system-level virtualization.
I think your observations are interesting. From my (somewhat biased) viewpoint I don't think we will enter into a 'post cloud' world. There are very real efficiency gains from running at public cloud provider scale, and the economics you see right now are not what I would consider 'steady state'. Beyond that the systems we are introducing with Kubernetes are focused on offering high levels of dynamism. They will ultimately fit your workload precisely to the amount of compute infrastructure you need, hopefully saving you quite a lot of money vs provisioning for peak. It will make a lot of sense to lease the amount of 'logical infrastructure' you need vs provisioning static physical infrastructure.
There are however legitimate advantages to our customers in being able to pick their providers and change providers as their needs change. We see the move to high levels of portability as a great way to keep ourselves and other providers honest.
Edit: Wired story: http://www.wired.com/2013/03/google-borg-twitter-mesos/
Kubernetes is heavily inspired by both Borg and Omega, and incorporates many of the ideas from both, as well as lessons learned along the way. And many of the engineers who work on Kubernetes at Google also worked on Omega and Borg.
Please feel free to respond to me at your leisure, but are you *sure* we will never enter a post-cloud world?
Not to say that there will be no cloud infrastructure, per se, just as mainframes still exist today.
On the other hand, I imagine someday we will have "datacenter in your pocket" type devices. The challenge will be who has the data -- obviously Google has already identified this as a key strategic advantage. The challenge will *not* be who has enough resources to compute it.
These pocket devices seem natural as a way to place strong AI at your fingertips, Siri-like agents, autonomous robots, etc. The first ones, which we have now, either use a data connection or are optimized to have small data sets, but the need for larger data sets is obvious. Once it becomes the primary limiter, I think it will only be a matter of time before "big data" is decoupled from the cloud and personal computing retakes its dominant position. Some will use laptops, some will use phones, but the effect will be the same.
There are also privacy benefits to managing large datasets on your own device -- solutions already exist for things like backing up your data, syncing large sets of common data among a network of untrusted peers, and curating that data.
Disclaimer: I work on Google Cloud but not Kubernetes or GKE. Also, Satya was my PhD advisor.
Thanks for commenting on this thread!
One of the reasons that I pushed hard to get Kubernetes open sourced, is the hope that we could get out in front of this, and allow the developer community to rally around Kubernetes as an open standard, independent of any provider or corporate agenda.
We've spent a lot of time working with the Kubernetes community. I can only speak to our experience, but Brendan, Craig, and the rest of the team at Google have 100% lived up to the commitment of treating the Kubernetes project as truly open and independent.
Our Kubernetes dashboard was recently merged into Kubernetes. We brought our own vision of a web UI to the project, and we could have gotten bogged down defending technology decisions and philosophical nits. Instead, the response from Google, Red Hat, and others in the community was basically "Awesome! How soon can we get it in?"
All of the key players have the right approach, and that gives me confidence in the project's longevity.
UI Demo video - https://www.youtube.com/watch?list=PL69nYSiGNLP2FBVvSLHpJE8_...
I look forward to Kubernetes becoming an independent project outside of Google then :)
Independent ownership and proper governance will set up the project for long-term success, and as a small company, you should prefer it to be that way.
I'm extremely pleased that Kubernetes has been open sourced by Google. It truly seems to me that the developer community is, and will remain, able to rally around Kubernetes as an open standard, both today and in the future, without fear of any outside agendas, as Brendan so eloquently stated. I for one applaud Google's level of transparency when it comes to the future of the project and the overall product vision.
Thanks for building k8s! Even if it doesn't "win" in the end, it's been an extremely useful and reliable solution for my needs.
For turn-up instructions on AWS, it's as easy as:
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
It's not just them - doing things this way makes it seem like this is in any way acceptable. It's not. Stop it.
No wonder it's so easy for TAO.
It's for the people who don't know any better and see this anti-pattern everywhere and thereby begin to think it's okay or accepted. It's not.
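For anyone who shares that discomfort with piping a remote script straight into bash, a minimal variant of the same turn-up separates download from execution (filename here is my own choice, not part of the official instructions):

```shell
# Fetch the installer to a local file instead of piping it into bash,
# so it can be read before anything executes.
wget -q -O kube-up.sh https://get.k8s.io
less kube-up.sh                   # inspect the script before trusting it
export KUBERNETES_PROVIDER=aws    # same provider selection as the one-liner
bash kube-up.sh
```

The end state is identical; the only difference is that you get a chance to see what you're about to run.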
Disclaimer: I work on Google Cloud but not Kubernetes or GKE.
Think of a cluster of VMs running CoreOS + Tectonic as an alternative to Google Container Engine.
Kismatic apparently calls itself "the Kubernetes Company."
I can imagine a future where it gets easier and more common to build an arbitrarily complex backend by just hooking together AWS services, using Lambda (or something that evolves from it) to write all your custom business logic without ever thinking about a server, VM, or container. I'm working on a greenfield app and very seriously considered this route, but we ended up deciding the uncertainty, versus doing it the way we know, wasn't quite worth it. It feels very close to the tipping point to me, though.
Either way, it's definitely an exciting time.
You're risking awakening the ghost of the Application Server.
I think the crucial question for us is going to be adoption and support within the AWS ecosystem. It checks out (to me at least) as the technically superior option, but Amazon clearly wants to compete in this space as well, and they have the home turf advantage.
Like @brendandburns, I just want the best technology to win and become the standard. It would be a shame if the Amazon/Google rivalry got in the way of something that important.
Can someone from Amazon chime in on this? Is there anything the Google team could do that would make Kubernetes a neutral project that Amazon would support? I feel that there's a ton of raw knowledge that Google engineers have accumulated on cluster management, and Kubernetes is an opportunity for that not to go to waste.
I'd also love to know the split between this, Omega, and Kubernetes at Google.
Think of how long the Python 2->3 transition has taken (outside Google, not speaking in Google terms anymore). It's been six years, and we're only now reaching the point where Python 3 may be a better choice for green-field projects than Python 2, and Python 3 may never be a better choice for legacy installs. The Borg -> Omega transition has a similar dependency issue (everything runs in the cloud at Google), the learning curve is worse than Python 2->3, and all of Google's code is legacy. That's independent of any technical differences between them, and also irrelevant to whether an organization just getting onto the cloud would be better off with Docker, Mesos, or Kubernetes.
The technically interesting question is whether decentralized scheduling at large scale is a solved problem or not. Can we do it better than centralized scheduling today?
It's a fairly straightforward getting started experience.
Also, if you want to turn up a cluster in a cloud provider, it's as simple as https://get.k8s.io
I have not yet tried any other Docker orchestration framework (there seem to be a few popping up right now), but concerning clustering: by comparison, Mesos appears intimidating to me (there is certainly not the two-minute "I get this" experience I've had with tools like etcd and Kubernetes). I remember building clusters with technology like heartbeat, corosync, OpenAIS, and DRBD not so long ago; compared to that, distributed computing has become incredibly easy.
My advice for starters would be to pick some ready-to-go Vagrant CoreOS setup and get it running on your workstation; this should be pretty straightforward. (We are running k8s on OpenStack/Rackspace, and there were too many moving parts involved to get the included starter scripts to reliably bootstrap a Kubernetes installation.)
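One such ready-to-go route might look like the following sketch; the repository name, directory layout, and kubeconfig path are assumptions on my part (check the README of whichever Vagrant CoreOS project you pick):

```shell
# Hypothetical single-node bring-up via a Vagrant CoreOS Kubernetes repo.
# Repo URL and paths below are assumptions, not canonical instructions.
git clone https://github.com/coreos/coreos-kubernetes.git
cd coreos-kubernetes/single-node
vagrant up                                 # boots a CoreOS VM running the k8s units
kubectl --kubeconfig=kubeconfig get nodes  # the single node should eventually show Ready
```

First boot can take a while as images are pulled, so don't be alarmed if the node isn't Ready immediately.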
Then look at the user-data/cloud-init of that project and try to rebuild things on your preferred stack from the bottom up, step by step; I feel much more in control when doing that. The components' log files are actually helpful when you assemble things. It also helps to look at the generated (and documented, thanks for this) iptables NAT rules when you have problems with service discovery/communication.
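When service discovery misbehaves, a few spot checks on the node itself usually tell you more than the client side does. This is a sketch, not a recipe: it needs root on the node, the KUBE-* chain names vary by kube-proxy version (hence the broad grep), and the systemd unit names assume a cloud-init setup like the one described above.

```shell
# Inspect the NAT rules kube-proxy generates for services; grep broadly
# because the chain names differ between kube-proxy versions/modes.
sudo iptables -t nat -L -n | grep KUBE

# The component log files mentioned above, assuming systemd units named
# kube-proxy and kubelet (an assumption; unit names depend on your setup).
journalctl -u kube-proxy --no-pager | tail -n 20
journalctl -u kubelet --no-pager | tail -n 20
```

Comparing the rules you see against the documented ones is often enough to spot a service whose endpoints never got wired up.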