
Official Kubernetes on CoreOS Guides and Tools - philips
https://coreos.com/blog/official-kubernetes-on-coreos/
======
philips
If you are looking for the fastest possible guide for playing around with
Kubernetes, check out the Vagrant single-machine instructions:
[https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html)

This will get you a "single machine" Kubernetes cluster and a working
`kubectl` tool in a few minutes.
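For reference, the whole flow boils down to a handful of commands; a rough sketch, assuming the repo layout the guide describes (repo path, directory, and kubeconfig location here are assumptions, so follow the linked docs for the exact steps):

```shell
# Rough shape of the single-machine Vagrant flow (paths are assumptions):
git clone https://github.com/coreos/coreos-kubernetes
cd coreos-kubernetes/single-node
vagrant up                               # boots one CoreOS VM running the k8s services
export KUBECONFIG="$(pwd)/kubeconfig"    # point kubectl at the new cluster
kubectl get nodes                        # the single VM should eventually show as Ready
```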

~~~
wstrange
This is awesome.

The existing single node solutions have been painful to get up and running,
and always seem to be missing some feature or other (DNS, for example).

~~~
hosh
I just spent 30 hours figuring that out (DNS, flanneld), and that's with using
hyperkube. (I'm building a new framework called Matsuri that helps you build a
custom dev environment for your workflow using Kubernetes.)

I don't regret it, since I know a lot more about the details of how this works
together, and this will help a lot when it comes time to putting this on AWS
for production. But I don't want to force this on my team either, so I'll end
up putting something together with Vagrant (or finding something that works
with it).

One thing I realized, though: this stuff should be baked into a standard Linux
distro. A working single-node Kubernetes setup works far better for dev work
than docker-compose / fig / crane / any number of homebrewed orchestration
tools. No wonder people are using CoreOS and RHEL 7/Atomic, and not Ubuntu.
It's seriously making me consider switching off Ubuntu to one of those.

~~~
crb
You can get a one-click managed Kubernetes cluster on Google Cloud Platform
(we call it Container Engine). Would that be interesting to you vs. running on
AWS? [Googler asking.]

~~~
hosh
Yeah, I saw there are a lot of great instructions for Kubernetes on GCE.

AWS, for all its warts and cruft, is familiar to me, and we have our
operations on it right now.

What I'm focused on right now is building a dev-environment framework. There's
already excessive focus on getting this stuff into production. Neither AWS nor
GCE will help me with that.

------
bkruse
We've been using CoreOS in production with etcd and fleet for over a year now
on 500+ machines. There have been some growing pains, specifically with etcd,
but now they are mostly gone (with the new raft implementation in 2.x). I
really appreciate the CoreOS team and all related contributors.

First persistent storage was tackled, then networking, and now resource-aware
orchestration.

~~~
edutechnion
I went down a similar road with etcd and fleet but abandoned it earlier this
summer after testing failure scenarios with etcd. With a cluster of 5 etcd
nodes in EC2, I started hard-killing etcd EC2 instances and noticed fleet
inconsistency (e.g., nodes being restarted, not able to see the entire fleet).

Can you expand on the etcd growth pains you've been through?

~~~
bkruse
The basis of this was being pointed in the right direction by the community.

etcd had a HUGE issue with the implementation of the Raft consensus algorithm
it was using. This was in version 0.x.

The tough part was that, even though etcd 2.0 was released in January [1], it
was not put into CoreOS alpha until April [2].

After moving to 2.x, all my problems went away. There was a small learning
curve around setting up lots of nodes in the cluster vs. proxies [3]. 2.x
added a lot of functionality, but the main one for us was its reliability:
being able to query the status of members, add/remove members from the
cluster, and do monitoring.

Before etcd 2.x, the whole etcd infrastructure would die (and consequently,
fleet) if just ONE node restarted. Needless to say, it's come a long way.

We've been running etcd 2.x since January in a container [4], then just doing
`export FLEETCTL_ENDPOINT=http://127.0.0.1:2379`

[1] - [https://coreos.com/blog/etcd-2.0-release-first-major-stable-release/](https://coreos.com/blog/etcd-2.0-release-first-major-stable-release/)

[2] - [https://coreos.com/blog/coreos-alpha-with-etcd-2/](https://coreos.com/blog/coreos-alpha-with-etcd-2/)

[3] - [https://coreos.com/etcd/docs/latest/admin_guide.html](https://coreos.com/etcd/docs/latest/admin_guide.html)

[4] - [https://coreos.com/blog/Running-etcd-in-Containers/](https://coreos.com/blog/Running-etcd-in-Containers/)
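For what it's worth, the member operations described above map onto the etcdctl v2 CLI roughly like this (the node name, member ID, and peer URL are made-up example values):

```shell
# etcd 2.x membership management, etcdctl v2 syntax (example values only):
#   etcdctl member list                               # who is in the cluster
#   etcdctl member add node4 http://10.0.0.4:2380     # new member's name + peer URL
#   etcdctl member remove <member-id>                 # remove by member ID
# With etcd running in a container on the local host, fleet just needs to be
# pointed at etcd's client port:
export FLEETCTL_ENDPOINT=http://127.0.0.1:2379
echo "$FLEETCTL_ENDPOINT"
```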

------
yuvipanda
Nice! Wikimedia Labs is soon going to start offering free Kubernetes access to
volunteers who want to work on Wikimedia-related projects
([https://lists.wikimedia.org/pipermail/labs-l/2015-September/004033.html](https://lists.wikimedia.org/pipermail/labs-l/2015-September/004033.html)),
although that isn't going to be on CoreOS.

------
saosebastiao
Anybody know of any projects with a similar concept but applied to unikernels
instead of containers? I've had way more luck working with OSv than I have had
with Docker, and I'm kinda burnt out on futzing with it... but I do like the
ideas behind Kubernetes.

~~~
justincormack
There is no fundamental reason why you shouldn't use the same tools. There is
a bit of work required to get it going, around standards and so on, that I am
looking at.

------
flyt
Has Kubernetes grown past its limit of ~100 underlying servers? What's the
largest production Kubernetes cluster running today?

~~~
philips
Yes, the community is working hard to make k8s scale to 1000s of machines and
beyond. There is a great blog post about the engineering effort from the
various communities (k8s, etcd) to make this happen:
[http://blog.kubernetes.io/2015/09/kubernetes-performance-measurements-and.html](http://blog.kubernetes.io/2015/09/kubernetes-performance-measurements-and.html)

In fact someone recently made a video showing a 1000 machine cluster; let me
find that.

UPDATE: here is the video
[https://www.youtube.com/watch?v=sgWv9sVTIYQ](https://www.youtube.com/watch?v=sgWv9sVTIYQ)

~~~
flyt
The last time I saw somebody from Google give a presentation about k8s, it
didn't seem like larger cluster sizes were a real priority at that point; they
were more focused on features. Good to hear this is getting some attention,
since it was the major issue that made our team dismiss it for production use.

~~~
jcastro
I've heard Googlers mention that same thing, but now that I think about it, I
believe they meant that they were targeting ~100 nodes "for 1.0", and once
that was out the door they would start tackling the large-cluster case, which
appears to be progressing nicely.

------
Perceptes
How does having the kubelet binary shipped with CoreOS itself work with
installing and upgrading Kubernetes? Won't the versions get out of sync and
cause problems?

~~~
philips
We rely on the API boundary between the kubelet and the Kubernetes API server
remaining well-versioned, with no backwards-incompatible changes. The k8s
project is very conscious of the need for that and works hard to ensure that
there are no breaking changes between releases. This is similar to all pieces
of software in a CoreOS release: systemd, docker, cloudinit, Linux, rkt, etc.

~~~
Perceptes
I see. I suppose the only downside of using the CoreOS version is having to
wait for it to be updated when new minor releases come out with new
features/improvements.

------
arunoda
Here's another way to jumpstart and learn Kubernetes. Copy and paste this
single command into any 64-bit Ubuntu VM and it'll give you a single-node
Kubernetes cluster with all the tools and services (incl. DNS):

    wget -qO- http://git.io/veKlu | sudo sh

Read more here: [https://github.com/meteorhacks/kube-init](https://github.com/meteorhacks/kube-init)

~~~
ImJasonH
Piping a random shortened link to sudo sh? What could go wrong?
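If you do want to try it, a more cautious pattern is to save the script to a file and read it before handing it to root (the filename here is arbitrary, and the last two lines demonstrate the same idea with a harmless stand-in script):

```shell
# Safer than `wget -qO- ... | sudo sh`: fetch, inspect, then run.
#   wget -qO kube-init.sh http://git.io/veKlu   # save the script locally
#   less kube-init.sh                           # read what will run as root
#   sudo sh kube-init.sh                        # run only after reviewing it
# The same fetch/inspect/run idea, with a harmless local stand-in:
printf 'echo inspected-and-run\n' > demo.sh
cat demo.sh        # the "inspect" step
sh demo.sh         # prints: inspected-and-run
```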

