
Container orchestration: Moving from fleet to Kubernetes - trojanowski
https://coreos.com/blog/migrating-from-fleet-to-kubernetes.html
======
mevile
I love the suggestion for new users to try minikube. I got started with
minikube and Kubernetes recently, and it was only then that I had an aha moment
with containers. I get it now. I know containers have been around a while but
with kubernetes the orchestration difficulty has been lowered to the point
where I can't imagine going back to the way I was getting things working
before. From minikube I moved to kubernetes on GCE, and it mostly just worked.
I still use minikube for my local dev environment.

~~~
hackerboos
    brew cask install minikube
    brew install kubectl

Then follow the tutorial
[https://github.com/kubernetes/minikube#quickstart](https://github.com/kubernetes/minikube#quickstart)
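
Once both are installed, a local bring-up looks roughly like this (a sketch,
not the official quickstart; exact flags may vary by version, and the `hello`
deployment is just an illustrative smoke test):

```shell
minikube start                       # boots a single-node cluster in a local VM
kubectl config use-context minikube  # point kubectl at the new cluster
kubectl run hello --image=nginx      # illustrative test deployment
kubectl get pods                     # watch the nginx pod come up
minikube stop                        # shut the VM down when finished
```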

~~~
emmelaich
Why does this process involve curl getting from
images.rcs.realclearpolitics.com?

~~~
rckrd
Something is not right on your machine. You can see what the homebrew recipe
is doing here:

[https://github.com/caskroom/homebrew-cask/blob/master/Casks/minikube.rb](https://github.com/caskroom/homebrew-cask/blob/master/Casks/minikube.rb)

~~~
emmelaich
Maybe it's Little Snitch getting confused.

ps. images.rcw..... not images.rcs..

------
zebra9978
What do people generally think about Docker Swarm? The new deployment using
.yml files is pretty cool:
[https://www.infoq.com/news/2017/01/docker-1.13](https://www.infoq.com/news/2017/01/docker-1.13)

In fact, IMHO Kubernetes has tried to do something similar with .. but it is
not engineered from the ground up for simplicity, which is why it has MULTIPLE
tools for this - minikube, kubeadm, kompose - but nothing matching the ease of
use of Docker and its .yml files.
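
For anyone who hasn't seen it, a hedged sketch of the compose-v3 flow being
referred to (the service name, image, and replica count are illustrative):

```shell
# Write a v3 compose file; the deploy: section is what swarm mode reads
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 3
EOF

# On a swarm-mode engine (after `docker swarm init`), ship it with:
#   docker stack deploy -c docker-compose.yml mystack
```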

The last survey showed 32% of those polled used Docker Swarm versus Kubernetes'
40% - and this was back when Docker Swarm was highly unstable.
[https://clusterhq.com/2016/06/16/container-survey/#kubernetes-is-the-most-popular-container-orchestration-tool-but-its-close](https://clusterhq.com/2016/06/16/container-survey/#kubernetes-is-the-most-popular-container-orchestration-tool-but-its-close)

Are people here using Swarm? What have your experiences been like?

~~~
EtienneK
I would also like to know the answer to this question. Every time I try
setting up a Kubernetes cluster, it's an exercise in frustration. Docker Swarm
is much easier in comparison.

Add to that the fact that Docker Swarm is adding Enterprise features (such as
Secrets in 1.13) and that it has an Enterprisey version (Docker Datacenter)
which supports multiple teams: why would I, an Enterprise developer and
architect, look at Kubernetes over Docker Swarm?

~~~
orf
My £0.02 is that Kubernetes has the backing of Google, who have a tremendous
amount of experience with container orchestration, and while using Kubernetes
it really shows: things are pretty well thought out, with lots of features out
of the box, etc.

With Docker Swarm it has taken them this long to get simple secrets
integrated, and as with all of my experiences with first-party Docker tools:
they seem OK at first, but the devil (and the problems) are in the details.

I trust Google more to get this right, and I highly doubt Kubernetes is going
anywhere.

~~~
nul_byte
Red Hat are doubling down on Kubernetes too (they are the second-biggest
contributor to Kubernetes), and if there is anyone who is good at taking parts
of an open-source ecosystem and supporting them for the enterprise, it's Red
Hat.

------
EwanToo
A brave decision, but I think it's the right one for both CoreOS, and in the
long-run, their customers.

Definitely pretty painful for people who have already adopted fleet, but a
year of support is much better than I would expect.

~~~
sytse
I too salute CoreOS for doing the right thing for their customers and the
ecosystem. Kubernetes was hard to predict: it didn't grow organically but was
suddenly released by Google.

Right now I believe Kubernetes is the project with the most accepted pull
requests per day. This came up in a talk from GitHub at Git Merge 2017. It
shows that k8s is on its way to becoming the default container scheduler
platform. It will be interesting to see how Docker Swarm and Mesosphere will
compete during 2017.

The container scheduler is becoming the next server platform. The fifth one
after mainframes, minicomputers, microcomputers, and virtual machines.

While configuring GitLab to run on k8s we learned that much of the work (like
Helm Charts) doesn't translate to Docker Swarm and Mesosphere. I think there
might be strong network effects similar to the Windows operating system.

~~~
davidmr
Completely agreed. The landscape 2-3 years ago looked incredibly different,
and I picked Mesos for a rather ambitious project. After it became relatively
clear that k8s was going to eclipse Mesos and not by a little bit, trying to
unwind that decision basically cost me my job through some political
infighting. It's a shame, but such is the price of early adoption, I guess.

Given the same information, I'm really confident that I'd still make both
choices the same way.

~~~
benjamin_mahler
Hi David, can you reveal some of your environment (e.g. on-prem vs cloud)?
Were there any technical reasons for switching, or was it primarily a matter
of the perceived velocity/popularity of the two projects?

Just to add some of my own perception as someone who works on Mesos, Mesos
continues to be popular with large technology companies that don't make their
technical investments lightly: Twitter, Apple, Netflix, Uber, Yelp, for
example. Companies continue to choose a Mesos stack based on its technical
merits. The project is still moving fast and adding powerful primitives to
support the needs of production environments while distributions like DC/OS
are trying to make Mesos more approachable (easy to install, administer) and
comprehensive (providing solutions for load balancing, logging, metrics, etc).
I hope you will take another look at the Mesos ecosystem at some point, a lot
of care has gone into it :)

~~~
mentat
Not OP, but I think the perception is that Mesos requires you to roll a lot
more of the solution yourself. That's fine if you're a large company who can
throw hundreds of developers at your platform, less so if you've got 5 or even
50.

------
simonvdv
Hmm, that's a pity, even though it shouldn't come as a surprise for anyone
who's actively using/involved with fleet. I like the simplicity and
flexibility of fleet (basically distributed systemd) a lot, and don't
necessarily want to switch to a bigger scheduler like Kubernetes. Does anyone
have any suggestions for/experiences with an alternative, simpler scheduler
(like Nomad, or an alternative solution like the autopilot stuff from Joyent)?

~~~
schmichael
Nomad dev here. We should definitely tick the simplicity box for you. If not,
let me know. :)

Nomad is a single executable for the servers, clients, and CLI. Just
download[0] & unzip the binary and run:

    nomad agent -dev > out &
    nomad init
    nomad run example.nomad
    nomad status example

And you have an example redis container running locally!

Nomad supports non-Docker drivers too: rkt, lxc templates, exec, raw exec,
qemu, java.[1] To use the "exec" driver that doesn't use Docker for
containerization you'll need to run nomad as root.

[0]
[https://www.nomadproject.io/downloads.html](https://www.nomadproject.io/downloads.html)

[1]
[https://www.nomadproject.io/docs/drivers/index.html](https://www.nomadproject.io/docs/drivers/index.html)

~~~
pslijkhuis
Completely unusable product for us because of the lack of persistent storage.

~~~
schmichael
Sorry to hear that! We've definitely focused on stateless containers until 0.5
which introduced sticky volumes and migrations. Useful in some cases but
definitely doesn't cover all persistent storage needs.

Extensible volume support will be coming in the 0.6 series via plugins.

------
chad-autry
I've been telling friends and co-workers I think kubernetes has won the
orchestration war. But even as I did so I wanted something simpler for my own
purposes, and so was using fleet.

Luckily for me, I'd stuck with making all my units global and driving their
deployment off of metadata. I think I'll just strip off the [X-Fleet] section
and start deploying them straight to systemd with ansible.
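
That stripping step can be sketched like this (the unit name and contents are
made up, and it assumes [X-Fleet] is the last section in the file, as is
conventional):

```shell
# A made-up fleet unit with an [X-Fleet] section at the end
cat > myapp.service <<'EOF'
[Unit]
Description=My app

[Service]
ExecStart=/usr/bin/docker run --rm myapp

[X-Fleet]
Global=true
MachineMetadata=role=backend
EOF

# Drop everything from the [X-Fleet] header to end-of-file,
# leaving a plain systemd unit that ansible can copy out
sed -i '/^\[X-Fleet\]/,$d' myapp.service
```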

~~~
tvaughan
Or just copy them? [https://gitlab.com/tvaughan/docker-flask-starterkit/blob/master/Makefile#L81](https://gitlab.com/tvaughan/docker-flask-starterkit/blob/master/Makefile#L81)

~~~
chad-autry
Ansible brings inventory management to the table. I can have an inventory with
my backend and frontend instances tagged, run my playbook, and it will
copy/start the appropriate systemd units.

------
Intermernet
As someone who's been working with containers since docker was released, I
feel like this is the right decision.

CoreOS are awesome, and I hope that rkt takes off (no pun intended).

K8s has been a fun companion to travel with on the road to stability, but I
think they've now got it right. I remember the confusion regarding config file
formats, network architecture, persistent storage etc and I'm happy to say
they've mostly got it nailed now.

Congrats to thocken and team.

My next experiments are with the SmartOS Docker support and Kubernetes.
Hopefully I can get K8s running nicely on Solaris zones and get better
container isolation happening.

Once again, I think CoreOS have made the right decision here, but that doesn't
preclude major changes in K8s itself!

------
raesene6
I think Kubernetes is a really interesting product and obviously has a lot of
momentum. That said, for something that's seeing wide adoption it still has a
lot of rough edges and things that need to be fleshed out.

One I ran across recently was the upgrade process for clusters. Per
[https://kubernetes.io/docs/admin/cluster-management/#upgrading-a-cluster](https://kubernetes.io/docs/admin/cluster-management/#upgrading-a-cluster),
it seems that unless you're on GCE the best way to upgrade a cluster is to
rebuild it from scratch, as the upgrade script is still "experimental", which
doesn't seem great.

The other area that I think Kubernetes is lagging Docker quite a bit on is
security documentation and tooling. There's no equivalent of the CIS guide for
Docker or Docker bench, both of which are useful in understanding the security
trade-offs of various configurations and choosing one that suits a given
deployment.

~~~
eicnix
Building a cluster from scratch is usually not a bad idea: You create a new
cluster with the upgraded version, combine both clusters through federation
and start moving pods from the old to the new cluster.
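
In kubectl terms the move looks roughly like this (a sketch; the context and
node names are placeholders, and the federation wiring itself is elided):

```shell
kubectl config use-context new-cluster   # point at the freshly built cluster
kubectl apply -f manifests/              # recreate the workloads there
kubectl config use-context old-cluster
kubectl drain old-node-1 --force         # evict pods from the old nodes
```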

Upgrading a cluster in place will come in the future.

~~~
raesene6
Whilst that _might_ make sense for major upgrades, what about cases like a
high-risk security fix where upgrade speed is important? People don't want to
be rebuilding from scratch in that kind of situation...

~~~
eicnix
I fully understand your issue. For me, creating a new cluster means running a
script that sets one up in ~15 minutes. There is
[https://github.com/apprenda/kismatic](https://github.com/apprenda/kismatic),
which can help simplify your cluster setup if you run in an enterprise
environment.

You can also take a look at
[https://coreos.com/tectonic](https://coreos.com/tectonic), where CoreOS
provides an enterprise Kubernetes distribution that supports updating a
Kubernetes cluster without downtime, though I personally haven't tested
Tectonic.

------
merb
I don't get that move. fleet was extremely well suited to scheduling a highly
available kubernetes master. as soon as you have 3 etcd nodes and 3 fleet
nodes, you could use fleet to bootstrap kubernetes in a way more stable
fashion than any of the other available options.

if people remove the low level tools to manage a cluster it will be harder and
harder to bootstrap higher level stuff.

but well, what to expect in the container space, stuff changes there just way
too often.

~~~
puzzle
You can do that kind of bootstrap without fleet. Just use ignition or cloud-
config with the right systemd units and a bunch of fixed IP addresses. I think
the CoreOS folks worked on a number of ways to simplify and automate
bootstrapping of the Kubernetes control plane, so they saw fleet as redundant
now. Besides, it took a long time for it to get something resembling a
mechanism that updates units in the cluster.

That said, being a lower level tool as you point out, it can be useful during
e.g. troubleshooting. Imagine the case where `fleetctl list-machines` returns
more nodes than `kubectl get nodes`.

~~~
merb
with fleet you could have a _single_ kubernetes master that would've been
started on another node as soon as one node went down. that won't work with
_just_ systemd units.

~~~
puzzle
I'm sure there are good use cases for Fleet, but running Kubernetes (or
anything else) with just one master is asking for trouble.

------
avichalp
I think it is a brave decision which might affect the current users of Fleet
for a while but will prove to be good for the community overall.

If you think from a newcomer's perspective, someone who is just getting
started with container orchestration does a lot of research to choose a
framework/tool, and providing them with a lot of suboptimal options doesn't
really help (I do not mean Fleet is suboptimal, but k8s is already close to
becoming a standard). It is always better to have one or two standard
solutions for a particular problem. Parallels can be drawn from the JavaScript
world, where an influx of libraries, frameworks, and tooling that only do a
few things differently from one another has led to a lot of confusion,
especially among beginners, and instead of thinking deeply about core concepts
people are often seen chasing the new shiny frameworks.

------
newsat13
I am sad to see fleet go. Fleet was quite simple to set up, but k8s is a
monster: it has so much terminology and tries to cover all the cases of cloud
orchestration. I think my fallback now is Swarm (hope it gets more stable
though).

~~~
wstrange
Kubernetes is dead simple to _use_, but can be a little daunting to set up.

Thankfully, that is changing with things like minikube, kubeadm, kops, and
self hosted Kube.
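
For anyone who hasn't tried it, the kubeadm flow is roughly this (a sketch;
kubeadm was still alpha at the time, and the token and master address are
placeholders printed by the init step):

```shell
# on the master node; prints a token for workers to join with
kubeadm init
# on each worker, using the token and master address from the step above
kubeadm join --token=<token> <master-ip>
# back on the master, verify the workers registered
kubectl get nodes
```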

I think the orchestration wars are essentially over. Kube has insane momentum,
and is a well architected solution.

~~~
vidarh
They're not over. Kubernetes might be dominant in the short term, but it's
very complex compared to the use cases a lot of people have been using things
like fleet for, and I for one will continue evaluating other options for that
reason. By far most of the cluster deployments I work on have needs where the
complexity of something like Kubernetes is totally unnecessary, so there will
be plenty of space for alternatives.

~~~
wmf
It looks like k8s is simplifying fast enough that it may eclipse the others.
kubeadm is already a huge improvement.

~~~
vidarh
kubeadm is not fixing the underlying complexity - it's putting a veneer on
top. It's certainly helpful for simplifying the deployment, but kubeadm is
only needed in the first place because of how complicated kubernetes is.

To be clear, I'm not saying there aren't deployments where the complexity of
something like kubernetes is necessary.

But most people only run a small number of servers. I'd argue most clusters
people are deploying will stay below 10 servers for their entire lifetime,
running a dozen or two services that generally tend to need basic high
availability, load balancing, and 1-3 different data stores with
replication/data-persistence requirements. For that kind of setup, while you
certainly _can_ run kubernetes, its complexity simply isn't needed.

~~~
Plugawy
That's the boat I'm in - we have less than a dozen services, and all we need
is to pack them neatly onto worker nodes with some load balancing. Setting up
Kubernetes for that looked like absolute overkill. I need to try Docker Swarm
and Nomad again, but either of them has to support rolling deploys out of the
box - last time I checked, neither of them did.

------
adamu__
That's too bad. I quite liked fleet for its simplicity, but maybe it is time
to spend more time with Kubernetes.

Having just finished a prototype Redis Cluster pseudo-PaaS built on fleet
makes it a bit of a gut punch, though.

------
dpratt
This is interesting, but has a potential problem - what do you use to schedule
the control plane?

Right now, we use Fleet to schedule a highly available k8s API server and
associated singleton daemons. The API server is then required to get anything
else scheduled in the cluster.

How are they going to solve this bootstrap problem?

~~~
Perceptes
As moondev pointed out, bootkube will eventually handle bootstrapping k8s
clusters. At my company we just set everything up using cloud-config. A
systemd unit boots the kubelet on each server, and static k8s manifests are
loaded by the kubelet to run the rest of the k8s components as pods. This way,
the kubelet itself is the only component that is not managed by k8s itself.
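
A hedged sketch of that static-manifest approach (the paths, image, version,
and flags here are illustrative, not our exact setup):

```shell
# The kubelet watches a manifest directory and runs whatever pods it finds
# there, with no API server required
mkdir -p manifests
cat > manifests/kube-apiserver.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube:v1.5.3
    command: ["/hyperkube", "apiserver", "--etcd-servers=http://127.0.0.1:2379"]
EOF

# Point the kubelet at the directory:
#   kubelet --pod-manifest-path=$(pwd)/manifests ...
```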

~~~
dpratt
This is precisely what we do on our cluster as well for the node-level daemons
- kubelet and kube-proxy.

We use fleet to schedule the HA API server. You cannot use the Kubelet to
schedule this, because you need an API server to schedule cluster-wide pods.

The only solution I can see is to have a config that launches a special
'master' node that runs the API server, but this is uncompelling to me. I'd
rather have every single node be identical, and get the API server to pop up
somewhere in the cluster using a master election process - which is precisely
what fleet does.

~~~
Perceptes
Fleet is still not necessary for HA control planes. There is no danger in
running multiple API servers at once. For some time now, the controller
manager and scheduler binaries have supported built-in leader election with
the --leader-elect option.
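
A hedged sketch of what that looks like in practice (flags from the k8s
releases of that era; the addresses are placeholders, and the `...` elides the
rest of the apiserver configuration):

```shell
# Every master runs an apiserver; they are stateless and can all serve at once
kube-apiserver --etcd-servers=http://127.0.0.1:2379 ...
# Controller manager and scheduler elect one active instance among themselves
kube-controller-manager --master=http://127.0.0.1:8080 --leader-elect=true
kube-scheduler --master=http://127.0.0.1:8080 --leader-elect=true
```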

------
seenitall
Sensible move, though I hope it's not too disruptive for Fleet users. I don't
think they have any option, though. The list of easy ways to try K8s should
include conjure-up on Ubuntu, for either laptop-scale or large
cloud/VMware/bare-metal deploys.

