There are a few things going on here. 1) Right now everyone is afraid that Docker will emulate VMware and crowd them out of the container space, much like VMware killed most of its competitors. 2) To this end, I have heard that Google and Red Hat have massive marketing budgets, and that the marching orders, over and over, have been: don't say Docker, say k8s. 3) The real battle is where the money is: large-scale distributed systems. Companies want to freeze Docker out, because Docker controls the lowest point of access, the container runtime itself. 4) Hence Google is trying to push "Docker compatible" ideas that are really just the OCI standard and have nothing to do with Docker itself.

AWS doesn't want to support Swarm, because it gives people portability off of their cloud. Google doesn't want to support Swarm, because K8s is a trojan for Google Cloud. No one else wants to support Swarm, because it competes with their products.

That said, what's happening right now, if we are not careful, will fragment the container ecosystem and make it impossible for single containers to target multiple runtimes.

Docker is the only one who can deliver a universal set of functionality that is leveraged by all. From a technology point of view, Docker is going in the right direction. We got burned by Red Hat in OpenShift 1 & 2 land, and that's left us with the view that the only things we can depend on are the container runtime itself and 12-factor applications.

K8s does not really work that way. It's huge and it's heavy, and it expects every app to be written its way.

The technical direction here for Docker is good, but the implementation and early release are ridiculous. I was impressed by the first RC, and then terrified that they released an RC as production.




> Docker is the only one who can deliver a universal set of functionality that is leveraged by all.

Why do you say that? I have quite a bit more faith in the design chops of the folks behind Kubernetes (Google, Red Hat, CoreOS, and many others) than in Docker Inc.

Swarm really only touches the surface of the requirements for large scale distributed container orchestration.

Kubernetes is complex because the problem it attempts to solve is complex.

I'd also add that Kubernetes is dead simple to use. The difficulty is in setting it up - but even that is getting much better.


Good question. K8s has a networking model that is incompatible with Swarm, Mesos, and Nomad. Swarm only scratches the surface of the requirements for complex deployments, but with K8s, the way they do things pretty much prevents other container orchestration systems from working in parallel.

For it to be universal, it has to live in the container runtime.


> K8s does not really work that way. It's huge and it's heavy, and it expects every app to be written its way.

I disagree. Kubernetes is quite lightweight, and its architecture is nicely modular. The core of Kubernetes is just four daemons. You can also run most of its own components on Kubernetes itself, which greatly eases the operational side.
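To make that concrete, here is a minimal sketch (mine, not the parent commenter's) that asks a running cluster which control-plane components it reports, via the official Python client; it assumes `pip install kubernetes` and a working kubeconfig.

```python
# Minimal sketch: list the control-plane components a cluster reports.
# Assumes the official client ("pip install kubernetes") and a kubeconfig.
from kubernetes import client, config

config.load_kube_config()          # reads ~/.kube/config
v1 = client.CoreV1Api()

# Typically this shows the scheduler, the controller-manager and etcd;
# the API server is the daemon answering the request itself, and each
# node additionally runs kubelet and kube-proxy.
for cs in v1.list_component_status().items:
    conditions = ", ".join(f"{c.type}={c.status}" for c in (cs.conditions or []))
    print(f"{cs.metadata.name}: {conditions}")
```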

> and it expects every app to be written its way.

Kubernetes makes zero assumptions about how an app is written, as long as it can run as a Docker (or rkt) image.

It imposes certain requirements, such as that each pod is allocated a unique IP address and that containers within a pod share networking, but that doesn't really impact how apps are written.
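To illustrate the pod model (a sketch under my own assumptions; the images and the pod name are made up, not from this thread): two unmodified containers placed in the same pod get a single IP and talk to each other over localhost, without either app being written "the Kubernetes way".

```python
# Sketch of the pod networking model described above, using the official
# Python client ("pip install kubernetes"). Images and names are illustrative.
import time
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="shared-net-demo"),
    spec=client.V1PodSpec(containers=[
        # An unmodified nginx image listening on :80 ...
        client.V1Container(name="web", image="nginx"),
        # ... and a sidecar that reaches it over localhost, because both
        # containers share the pod's single IP and network namespace.
        client.V1Container(
            name="probe",
            image="busybox",
            command=["sh", "-c",
                     "while true; do wget -qO- http://127.0.0.1:80 >/dev/null; sleep 5; done"],
        ),
    ]),
)
v1.create_namespaced_pod(namespace="default", body=pod)

# The pod IP is assigned once the pod is scheduled; poll briefly for it.
for _ in range(30):
    ip = v1.read_namespaced_pod("shared-net-demo", "default").status.pod_ip
    if ip:
        print("single IP shared by both containers:", ip)
        break
    time.sleep(2)
```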

> K8s is a trojan for Google Cloud

Doubt it very much. For one, the Kubernetes experience on GCloud (GKE) isn't particularly good at all: the "one click" setup uses the same Salt ball of spaghetti that kube-up.sh uses, the upgrade story isn't great, alpha/beta features are disabled, you can't disable privileged containers, ABAC is disabled, the only dashboard is the Kubernetes Dashboard app (which is still a toy), and GCloud still doesn't have internal load balancers. Setting it up from scratch is preferable, even on GCE.

Additionally:

* Kubernetes has excellent support for AWS as well as technologies such as Flannel for running on providers with less flexible networking.

* Google makes a lot of effort to help you to set it up on other providers (also see kube-up).

* Projects like Minikube let you run it locally.

If Kubernetes is a "trojan" for anything, it's for improving the containerization situation generally, because this is an application deployment model where Google can compete with AWS, which doesn't have a good container story at all (ECS is pretty awful).


Arguably, the whole reason Google is sponsoring K8s is to promote GCE and GKE. It's their main long-term play against AWS (moving the world to containers instead of VMs).


Disclaimer: I work for Red Hat on OpenShift.

I apologize for your experience with Red Hat OpenShift 1 & 2. OpenShift 3, which has been out for more than a year now, is natively built around both Docker and Kubernetes. Red Hat developers are among the top contributors to Docker, Kubernetes, and OCI. With OpenShift we seek to provide an enterprise-ready container platform, built on standard open source technologies, available as both software and a public cloud service. I hope you will give us another look!


I work for a Red Hat competitor in this space, Pivotal.

Like this fellow says, OpenShift 3 is lightyears ahead of 1 & 2.

(Obviously, my horse in this race is Cloud Foundry)


I work for Google Cloud (though my opinions are my own).

If people want to run Swarm or Nomad or Rancher on Compute Engine, then more power to them!

In fact, I even open sourced deployment templates to run Swarm on GCE and hopefully will add autoscaling and load balancing soon: https://github.com/thesandlord/google-cloud-swarm


I agree with you that lock-in is a big motivator here. It's always been king in the software space. As you point out, k8s exists as a public project specifically to diminish AWS's lock-in and make it simple to deploy out to other cloud providers (Google Cloud specifically).


Disclaimer: I work at Google and was a founder of the Kubernetes project.

In a nutshell yes. We recognized pretty early on that fear of lockin was a major influencing factor in cloud buying decisions. We saw it mostly as holding us back in cloud: customers were reluctant to bet on GCE (our first product here at Google) in the early days because they were worried about betting on a proprietary system that wasn't easily portable. This was compounded by the fact that people were worried about our commitment to cloud (we are all in for the record, in case people are still wondering :) ). On the positive side we also saw lots of other people who were worried about how locked in they were getting to Amazon, and many at very least wanted to have two providers so they could play one off against the other for pricing.

Our hypothesis was pretty simple: create a 'logical computing' platform that works everywhere, and maybe, if customers liked what we had built, they would try our version. And if they didn't, they could go somewhere else without significant effort. We figured at the end of the day we would be able to provide a high quality service without doing weird things in the community, since our infrastructure is legitimately good and we are good at operations. We also didn't have to agonize about extracting lots of money out of the orchestration system, since we could just rely on monetization of the basic infrastructure. This has actually worked out pretty well. GKE (Google Container Engine) has grown far faster than GCE (actually faster than any product I have seen), and the message around zero lock-in plays well with customers.


Not speaking in an official capacity, but the analogy I've seen used is that big companies don't want to relive the RDBMS vendor lock-in experience.

I'm speaking about something other than k8s (Cloud Foundry), but the industry mood is the same. Folks want portability amongst IaaSes. Google is an underdog in that market, so it behooves them to support that effort -- to the point that there are Google teams helping with Cloud Foundry on GCP.

Disclosure: I work for Pivotal; we donate the majority of the engineering to Cloud Foundry.


K8s is essentially "AWS in a box", and it's a product that locks you in. As soon as a k8s cluster is running on GKE, it becomes not very portable at all, due to operational complexity as well as ties to the Google infrastructure.


> That said, what's happening right now, if we are not careful, will fragment the container ecosystem and make it impossible for single containers to target multiple runtimes.

Not a chance. There is Packer [0], which gets rid of any potential lock-in or monopoly. It's a universal image/container creation tool.

- It re-uses your ansible/chef/puppet/shell/whatever scripts for setting up the image.

- It outputs Docker containers, Amazon AMIs, Google Compute images, VMware images, or VirtualBox images - whichever you like, from the same configuration (see the sketch below).

[0] https://www.packer.io/
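To give a flavor of what that looks like (a sketch under my own assumptions, not lifted from the Packer docs): a single template with a Docker builder and a shell provisioner; adding an "amazon-ebs" or "googlecompute" builder to the same template would reuse the identical provisioning step to produce an AMI or a GCE image.

```python
# Sketch: write a minimal Packer template and run it from Python.
# Assumes the "packer" binary is on PATH; builder/provisioner values are
# illustrative, not taken from this thread.
import json
import subprocess

template = {
    "builders": [
        # Build a Docker image from a stock Ubuntu base and commit the result.
        {"type": "docker", "image": "ubuntu:16.04", "commit": True}
    ],
    "provisioners": [
        # The same shell (or ansible/chef/puppet) step would also run against
        # an amazon-ebs or googlecompute builder added to the builders list.
        {"type": "shell", "inline": ["apt-get update", "apt-get install -y nginx"]}
    ],
}

with open("web.json", "w") as f:
    json.dump(template, f, indent=2)

subprocess.run(["packer", "build", "web.json"], check=True)
```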



