brew install kubectl
Then follow the tutorial https://github.com/kubernetes/minikube#quickstart
- Arch Linux AUR https://aur.archlinux.org/packages/minikube/
- Windows installer https://github.com/kubernetes/minikube/releases/download/v0....
- Deb package https://github.com/kubernetes/minikube/releases/download/v0....
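For reference, the quickstart boils down to roughly this once minikube is installed (a sketch, not the full tutorial):

minikube start       # boot a single-node local cluster in a VM
kubectl get nodes    # confirm kubectl can talk to it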
PS: images.rcw..... not images.rcs..
In fact, IMHO Kubernetes has tried to do something similar with .. but it is not engineered from the ground up for simplicity. Which is why it has MULTIPLE tools for this - minikube, kubeadm, kompose - but nothing matching the ease of use of Docker and its yml files.
The last survey showed 32% of those polled used Docker Swarm versus Kubernetes' 40% - and that was back when Docker Swarm was highly unstable. https://clusterhq.com/2016/06/16/container-survey/#kubernete...
Are people here using Swarm? What have your experiences been like?
I think you are misunderstanding the tools listed. Minikube sets up a single-node local cluster. Kubeadm sets up a multi-node cluster. No matter how or where your cluster is set up, you still deploy with manifests.
Which is exactly my point about minikube, kubeadm, and kompose - with Swarm, you use a single tool for either a single-node cluster or a multi-node cluster (see the sketch below). Even more, kompose was invented to read the same Docker Compose file format - because it is so intuitive.
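To illustrate, going from nothing to a multi-node Swarm running a compose stack is just (tokens and IPs here are placeholders):

docker swarm init                                            # first node becomes a manager
docker swarm join --token <worker-token> <manager-ip>:2377   # run on each additional node
docker stack deploy -c docker-compose.yml myapp              # deploy the same compose file format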
I'll go one step further - kubeadm does not actually have high-availability support, so you have to use kargo or kops to reasonably deploy in production.
Kubernetes introduces a lot of upfront complexity, sometimes with little benefit. For example, kargo fails with Flannel but works with Calico (and so on and so forth). Bare-metal deployments with Kubernetes are a big pain because the load balancer setups were not built for them - most Kubernetes configs depend on cloud load balancers (like ELB). In fact, the code for bare-metal load balancer integration has not been fully written for Kubernetes.
Now, my point is not that Kubernetes sucks - I think it's a great piece of tech. But why do people think Docker Swarm will die, or that it sucks? Because, relatively speaking, while Kubernetes NEEDS all kinds of complicated orchestration tools (and consultants!) to set it up, Swarm is damn easy to set up for a developer building his first stack.
Is Docker Swarm the Heroku to Kubernetes' AWS?
I have no specific examples for Docker Swarm, but using this approach in other areas has led to some pretty major deficiencies in Docker's design that they have been slow to fix, and I'm not keen on seeing that happen again.
For a concrete example, see https://github.com/docker/docker/issues/19474 - in a minor release, they completely changed how DNS worked and broke previously working systems (e.g. https://github.com/weaveworks/weave/issues/2157), all in the name of service discovery.
Incidentally, the embedded DNS feature is fairly extensively leveraged by Kubernetes - it takes care of situations where you don't want to muck around with the underlying /etc/hosts (on the actual metal) and want to make your changes only in the containers.
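For anyone who hasn't used it, a rough sketch of the embedded DNS behavior (container and network names here are made up):

docker network create appnet
docker run -d --net appnet --name db redis
docker run --rm --net appnet busybox nslookup db   # "db" resolves via Docker's embedded DNS at 127.0.0.11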
But I'm hearing what you're saying more and more - Docker Inc. has a huge PR problem. Docker Swarm may actually be good, but people generally dislike the organization itself.
You don't see these kinds of responses with Fleet, Mesos, or even OpenStack. Docker Swarm is a genuinely sweet piece of tech, so this is rather unfortunate.
Add to that the fact that Docker Swarm is gaining Enterprise features (such as secrets in 1.13) and has an Enterprisey version (Docker Datacenter) which supports multiple teams - why would I, an Enterprise developer and architect, look at Kubernetes over Docker Swarm?
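For reference, the 1.13 secrets workflow is about as minimal as it gets (secret name, value, and image are placeholders):

echo "s3cr3t" | docker secret create db_password -               # store the secret in the swarm
docker service create --name app --secret db_password myimage   # exposed to the service at /run/secrets/db_password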
With Docker Swarm it's taken them this long to get simple secrets integrated, and as with all of my experiences with first-party Docker tools: they seem OK at first, but the devil (and the problems) are in the details.
I trust Google more to get this right, and I highly doubt Kubernetes is going anywhere.
Kubernetes is also based on years of running containers at Google itself; it solves real problems. Allowing multiple containers to run in the same pod makes for much nicer composability than running multi-process containers.
Have you tried setting up a k8s cluster recently? I believe they added kubeadm for much easier setup in 1.5, which was released a few weeks ago.
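The kubeadm flow is roughly this (token, address, and port are placeholders - check the docs for your version):

kubeadm init                                      # on the master; prints a join token
kubeadm join --token <token> <master-ip>:<port>   # on each node you want to add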
Why are you equating low barrier of entry with quality? I think MongoDB ought to have taught everybody in this field that you can have a low barrier of entry and still be a crap product.
Kubernetes, whilst a cool product, still has a lot of rough edges even now. One I encountered recently was that to upgrade a locally deployed cluster from 1.4 to 1.5, the answer appears to be "re-install from scratch", as the upgrade script is still "experimental" (https://kubernetes.io/docs/admin/cluster-management/#upgradi...)
For an MVP or a small production stack that runs on one server, I would go with Docker Swarm for its simplicity and small footprint. And even if you do end up scaling across many nodes, you still won't need k8s (Kubernetes).
Definitely pretty painful for people who have already adopted fleet, but a year of support is much better than I would have expected.
Right now I believe Kubernetes is the project with the most accepted pull requests per day. This came up in a talk from GitHub at Git Merge 2017. It shows that k8s is on its way to becoming the default container scheduler platform. It will be interesting to see how Docker Swarm and Mesosphere will compete during 2017.
The container scheduler is becoming the next server platform. The fifth one after mainframes, minicomputers, microcomputers, and virtual machines.
While configuring GitLab to run on k8s we learned that much of the work (like Helm Charts) doesn't translate to Docker Swarm and Mesosphere. I think there might be strong network effects similar to the Windows operating system.
Given the same information, I'm really confident that I'd still make both choices the same way.
Just to add some of my own perception as someone who works on Mesos, Mesos continues to be popular with large technology companies that don't make their technical investments lightly: Twitter, Apple, Netflix, Uber, Yelp, for example. Companies continue to choose a Mesos stack based on its technical merits. The project is still moving fast and adding powerful primitives to support the needs of production environments while distributions like DC/OS are trying to make Mesos more approachable (easy to install, administer) and comprehensive (providing solutions for load balancing, logging, metrics, etc). I hope you will take another look at the Mesos ecosystem at some point, a lot of care has gone into it :)
Interesting though that the last 3 paradigms are largely built on each other. I'm involved in a deployment at the moment which started with buying servers, implementing VMs on them, and finally laying k8s on top of that.
I know most uses of k8s won't ever really see the layers below, but they're still there...
Nomad is a single executable for the servers, clients, and CLI. Just download & unzip the binary and run:
nomad agent -dev > out &    # start a dev-mode agent (server + client) in the background
nomad run example.nomad     # submit the example job
nomad status example        # check on the job
Nomad supports non-Docker drivers too: rkt, lxc templates, exec, raw_exec, qemu, java. To use the "exec" driver, which doesn't use Docker for containerization, you'll need to run nomad as root.
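For a flavor of what an exec-driver job looks like, here's a minimal sketch (job name, command, and args are made up):

job "hello" {
  datacenters = ["dc1"]
  group "app" {
    task "greet" {
      # "exec" isolates the process without Docker; requires nomad running as root
      driver = "exec"
      config {
        command = "/bin/echo"
        args    = ["hello from nomad"]
      }
    }
  }
}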
Extensible volume support will be coming in the 0.6 series via plugins.
BTW, we are also using Triton (formerly SmartDC) from Joyent and are absolutely loving it. It's not without its rough edges, but it is by far the best public/private cloud option we have found that supports both containers and VMs.
I have projects where Kubernetes is probably the right choice, but I have many more where Kubernetes is massive overkill and where I also need/want the distributed systemd units.
Luckily for me, I'd stuck with making all my units global and driving their deployment off of metadata. I think I'll just strip off the [X-Fleet] section and start deploying them straight to systemd with Ansible.
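The Ansible side of that is mechanical - something like this (unit and file names are hypothetical):

- name: Install the unit, with its [X-Fleet] section stripped
  copy:
    src: myservice.service
    dest: /etc/systemd/system/myservice.service

- name: Enable and start it on every host
  systemd:
    name: myservice.service
    state: started
    enabled: yes
    daemon_reload: yes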
CoreOS are awesome, and I hope that rkt takes off (no pun intended)
K8s has been a fun companion to travel with on the road to stability, but I think they've now got it right. I remember the confusion regarding config file formats, network architecture, persistent storage, etc., and I'm happy to say they've mostly got it nailed now.
Congrats to thocken and team!
My next experiments are with the SmartOS Docker support and Kubernetes. Hopefully I can get K8s running nicely on Solaris zones and get better container isolation happening.
Once again, I think CoreOS have made the right decision here, but that doesn't preclude major changes in K8s itself!
One I ran across recently was the upgrade process for clusters. Per https://kubernetes.io/docs/admin/cluster-management/#upgradi... it seems that unless you're on GCE, the best way to upgrade a cluster is by rebuilding it from scratch, as the upgrade script is still "experimental" - which doesn't seem great.
The other area that I think Kubernetes is lagging Docker quite a bit on is security documentation and tooling. There's no equivalent of the CIS guide for Docker or Docker bench, both of which are useful in understanding the security trade-offs of various configurations and choosing one that suits a given deployment.
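For anyone unfamiliar, Docker bench is trivial to run against a host, per its README:

git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh    # audits the host against the CIS Docker benchmark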
Upgrading a cluster in place will come in the future.
You can also take a look at https://coreos.com/tectonic, where CoreOS provides an enterprise Kubernetes distribution that supports updating a Kubernetes cluster without downtime, though I personally haven't tested Tectonic.
Yes, I'm concerned about this not just with k8s but with Docker as well. Both are very immature products, and there's a massive rush to adopt them, attributable almost entirely to social pressures and the insecurities of the people who lead these tech departments.
When things like StatefulSets and persistent storage are still iffy/under development, it should be clear that these things are nowhere near production-ready.
If people remove the low-level tools for managing a cluster, it will be harder and harder to bootstrap the higher-level stuff.
But, well, what do you expect in the container space - things there change way too often.
That said, being a lower-level tool as you point out, it can be useful during troubleshooting, for example. Imagine the case where `fleetctl list-machines` returns more nodes than `kubectl get nodes`.
My shortcut was using https://github.com/kubernetes-incubator/kompose to convert my docker-compose.yml to the equivalent K8S objects. It wasn't as simple as just running it, but it let me see what it would basically take to do the same thing in Kubernetes. It ended up taking just a few days to wrap my head around it all and get it up and running. Probably even easier if you use something like GKE which manages the cluster for you. If you're investing in using containers for the long-haul, I think it's definitely worth the learning overhead.
There are only three key object types you need to understand to start using K8S: Deployments, Pods & Services. Feel free to msg me if you have some questions about getting started.
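To make that concrete, here's a minimal sketch of a Deployment with a Service in front of it (names and image are placeholders; older clusters may need extensions/v1beta1 instead of apps/v1):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: { app: web }
  ports:
  - port: 80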
Thankfully, that is changing with things like minikube, kubeadm, kops, and self hosted Kube.
I think the orchestration wars are essentially over. Kube has insane momentum and is a well-architected solution.
To be clear, I'm not saying there are no deployments where the complexity of something like Kubernetes is necessary.
But most people only run a small number of servers. I'd argue most clusters being deployed will stay below 10 servers for their entire lifetime, running a dozen or two services that generally need basic high availability, load balancing, and 1-3 data stores with replication/persistence requirements. For that kind of setup, while you certainly can run Kubernetes, its complexity simply isn't needed.
Having just finished a prototype Redis Cluster pseudo-PaaS built on fleet, though, this is a bit of a gut punch.
Right now, we use Fleet to schedule a highly available k8s API server and its associated singleton daemons. The API server is required to get anything else scheduled in the cluster.
How are they going to solve this bootstrap problem?
This is the exact same methodology I've been using, and it's worked rather well. The current CoreOS documentation on running Kubernetes follows this methodology too.
We use fleet to schedule the HA API server. You cannot use the kubelet to schedule this, because you need an API server before you can schedule cluster-wide pods.
The only solution I can see is to have a config that launches a special 'master' node running the API server, but that is uncompelling to me. I'd rather have every single node be identical and have the API server pop up somewhere in the cluster via a master-election process - which is precisely what fleet does.
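Concretely, the fleet unit is something like this (a sketch with flags abbreviated; the unit name is hypothetical) - the Conflicts directive in [X-Fleet] is what spreads the replicas across machines while fleet picks where they land:

# kube-apiserver@.service
[Unit]
Description=Kubernetes API Server

[Service]
ExecStart=/usr/bin/kube-apiserver --etcd-servers=http://127.0.0.1:2379
Restart=always

[X-Fleet]
Conflicts=kube-apiserver@*.service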