
Kubernetes the Hard Way - tantalor
https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/README.md
======
anemic
Lots of good information here but it is still not enough for a production
setup IMHO.

There is a great need for good sources on production setups. In open source
software this seems to be the secret that no one is willing to reveal. I tried
to set up a Kubernetes cluster from scratch a while ago and soon I was
browsing the source code for answers. OpenStack is the same: you need to
understand a lot about the inner workings before you can even attempt to set
up something for production.

There is always a simple "this shell script starts your own <name your tech
here> cluster in Vagrant", but it is still not a production setup.

And even though this article is the "hard way", it says:

> This is being done for ease of use. In production you should strongly
> consider generating individual TLS certificates for each component.

And it does not mention the crucial part: the common name field in the
certificate maps to the user name. That is the magic piece of information I
once needed.
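For anyone hitting the same wall: the apiserver takes the client certificate's Common Name as the user name (and the Organization fields as group names). A minimal openssl sketch, using a made-up "admin" user and a throwaway CA (a real cluster would use its existing CA):

```shell
# Throwaway CA (hypothetical; stands in for the real cluster CA):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=hypothetical-ca"

# Client key + CSR: CN becomes the Kubernetes user, O becomes a group.
openssl req -newkey rsa:2048 -nodes \
  -keyout admin.key -out admin.csr -subj "/CN=admin/O=system:masters"

# Sign the client cert with the CA.
openssl x509 -req -days 365 -in admin.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt

# The subject the apiserver will authenticate the client as:
openssl x509 -in admin.crt -noout -subject
```

Point kubectl at admin.crt/admin.key (e.g. via `--client-certificate` / `--client-key`) and the apiserver sees requests as coming from user "admin".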

I sincerely appreciate this article but production setup is still a long
learning experience away.

~~~
continuational
I propose a new term: Consultancy Driven Development. It goes like this:

- If it's too easy to set up, nobody will hire us to make it work.

- Implement a kickass setup dirt cheap for some big-name company, so we can
claim they use it in production. Yeah, we tweaked it so it bears little
resemblance to the original product, and only fits an incredibly narrow use
case, but nobody stands to benefit from blogging about that.

- Better ship with a configuration file that isn't production ready.

- Did I say one? Better have three configuration files, each duplicated in
distribution-dependent directories (in some cases), needing manual sync
between servers to prevent catastrophic data loss.

- Remember not to publish the program that checks for errors in
configuration; half of our income would disappear.

- Benchmark with a configuration file that nobody would use in production,
but which looks really impressive when taken out of context.

- People want transactions, so remember to claim support (and, if you must,
explain somewhere in the fine print that a transaction can only span a single
operation on exactly one document, and btw. is precisely none of A, C, I or
D).

- Somewhere on the front page, it should say how we can support petabytes of
data (and it performs very well, as long as you write all your data in one
batch, never modify it, keep it all in memory, and turn off persistence).

- Never give away answers online. Answer every question about configuration
with "it depends".

- Don't release a new version without renaming a few configuration options.
Be "backwards compatible" by ignoring unknown, obsolete and misspelled
options.

~~~
ymse
What you are describing is basically OpenStack. Although Hanlon's razor
applies, none of the current actors stands to benefit from improving the
situation.

* Extremely difficult to set up.

* Claims that half of Fortune 100 uses it (read: many are required to support it; the rest have one guy with a toy installation in some branch office).

* Consists of dozens of components, each with several-thousand-line config files (actually Python code) that must be kept in sync between all nodes (yet contain node-specific data).

* Claims to be "modular", but has complex interdependencies between each of the components.

* Upgrading is not officially supported, but some companies will help you.

* Will break in mysterious ways, and require you to backport bugfixes since you're stuck on an unsupported version after a year.

* Has unhelpful error messages (e.g. throws a Connection Refused exception when you're actually receiving an unexpected HTTP return code).

* Writes documentation in a way that appears OK to new users, but is vague enough to be useless for those looking for specific information.

~~~
andrewjf
This is the most accurate description of OpenStack I've ever read. Congrats!

------
mattikus
Just saw Kelsey give a talk at Abstractions about more advanced patterns in
Kubernetes and he mentioned this repo. Looks like a fantastic tutorial and his
talk was very informative.

Highly recommend watching the video when it's released if you weren't able to
attend.

~~~
jeffnappi
Same! Actually I asked the question that led him to point me to this repo :)

We had a conversation later in the hotel lobby where he made a great analogy:
running this stuff yourself is going to be like running your own mail server.
It's really nice to know how to do it, but at the end of the day unless you are
a very large organization, you're most likely going to use a hosted service.

Personally, I'm going to go through with setting up a test kubernetes cluster
just so I know what it's made of. Then if I think it's great, maybe... just
maybe I'll give Google's hosted solution a try with a small project to start.

~~~
mattikus
I feel similarly to Kelsey. I also plan on setting up a test cluster to learn
the ins and outs and seeing if it's something that might fit in at work for
our needs.

For personal stuff GKE looks really nice.

------
marcoceppi
This is a great starting point. We've been running Kubernetes in production
alongside OpenStack for a while and charm'd up the deployment:
[https://jujucharms.com/kubernetes-core](https://jujucharms.com/kubernetes-core).
The majority of the information here (and more) seems to already be
encapsulated: `juju deploy kubernetes-core`. Since we need things like logging
and monitoring, we bolted the Elastic stack on the side and called it
observable kubernetes: `juju deploy observable-kubernetes`.

While one-liners are typically pretty limited, the charms come with quite a
few knobs to help tweak for deployments.

[http://www.jorgecastro.org/2016/07/29/ubuntu-kubernetes-v1-dot-3-3-ready-for-testing/](http://www.jorgecastro.org/2016/07/29/ubuntu-kubernetes-v1-dot-3-3-ready-for-testing/)

There's still room to improve, but we've been happy with the cluster so far,
and Juju and the charms are open source. Either way, a great guide for those
getting started.

------
jondubois
The article mentioned Google Container Engine as one of the 'easy ways', but
it didn't mention Rancher ([http://rancher.com/](http://rancher.com/)). It's
not quite as easy as GCE, but I found it pretty easy.

~~~
nhumrich
I use Rancher in prod. I absolutely love it. I don't use Kubernetes in prod,
though. But if you want to run Kubernetes through Rancher, it is pretty easy.

------
mmagin
So, I see something like this, assume they mean "from scratch", read a little
way down the README and it says "The following labs assume you have a working
Google Cloud Platform account and a recent version of the Google Cloud SDK
(116.0.0+) installed."

The irony.

------
nathan_f77
Here is Kubernetes the easy, production-ready way:
[https://github.com/kz8s/tack](https://github.com/kz8s/tack)

I've spun up a few clusters using this, and I absolutely prefer learning this
way. I love to get something running before I dive in and look at all the
pieces and individual options. I also really love "convention over
configuration". I want to study a production-ready reference implementation
that simply works, with sane defaults.

I've spent a lot of time banging my head against the wall while I try to
follow some complicated tutorial that doesn't work with the latest versions.
Maybe this approach works for others, but it's not for me. I like to stand up
a cluster, deploy a database and a real application, try to scale it, set up
some test DNS records, do some load testing. Figure out the pain points and
learn as I go. If something just works perfectly fine behind the scenes, then
I probably don't need to learn that much about it (or I have enough experience
that I already know what it does and how it works.)

This might not be a suitable learning style for a beginner, but I think I have
enough experience working and experimenting with AWS, Saltstack, Chef, Puppet,
Ansible, OpenStack, Deis, Flynn, Terraform, Docker, CoreOS, etc. etc. So at
this point, I just prefer to evaluate new technologies by spinning them up and
diving straight in.

------
philips
For Kubernetes the easy way, check out this blog post on the work going on
upstream for "self-hosted" Kubernetes:
[https://coreos.com/blog/self-hosted-kubernetes.html](https://coreos.com/blog/self-hosted-kubernetes.html)

------
readams
It would be nice if all the Kubernetes tutorials didn't assume you're using
GCE.

~~~
ibizaman
Exactly. I expected "hard way" to be using your local box without any external
cloud provided service.

------
felixgallo
Kelsey is a national treasure. Kubernetes is getting pretty close to being
ready, if it can avoid the same fate as OpenStack and the like. Interesting
times.

------
bdcravens
What's nice about this is it seems to teach you without taking a GKE-first
approach, like some of the other tutorials out there.

~~~
SEJeff
Which is doubly awesome coming from a Google employee. It shows Kelsey really
wants to teach k8s to everyone.

------
user5994461
Looked at the documentation and quickly gave up on Kubernetes. It's nice and
it solves real problems, but the barrier to entry is INSANE.

And it's lacking way too much documentation on deploying in production on
your own cluster. It's probably gonna take years to improve.

I'm wondering: has anyone tried Kubernetes on GCE?

If Google can handle all the annoying setup and makes a mostly managed
service, that would be extremely attractive. Actually, that's probably the
only way k8s would be achievable in production, i.e. have someone else do it.

~~~
felixgallo
Google did that already:
[https://cloud.google.com/container-engine/](https://cloud.google.com/container-engine/)

------
ldehaan
This seems interesting. I have to say I have heard a lot of people talking
about Kubernetes but few actually using it in production.

For your CI woes there is a system that isn't hard to set up and is actually
really easy to use: Mesos and Marathon with Weave (without the plugin, for
now) and Docker.
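For a sense of what Marathon asks of you: an app is just a JSON document POSTed to its `/v2/apps` REST endpoint, and Marathon keeps the requested instances running across the Mesos cluster. A minimal sketch (the app id, image, and resource numbers here are made up):

```json
{
  "id": "/hello",
  "cpus": 0.1,
  "mem": 64,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "python:3-alpine",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 8000 }]
    }
  },
  "cmd": "python3 -m http.server 8000"
}
```

Saved as `app.json`, something like `curl -X POST -H 'Content-Type: application/json' http://<marathon-host>:8080/v2/apps -d @app.json` submits it; if a node dies, Marathon reschedules the lost instances elsewhere.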

Your biggest challenge is learning zookeeper, but really if you're dealing
with large scale deployments, you're probably already using it or something
like it.

There are puppet/chef/ansible modules for installing and configuring Mesos
and Zookeeper.

Toss in Gluster as a storage driver in Docker and you're pretty much ready to
go for most types of application deployments using Docker.

Heck, there's even Kubernetes integration if you're really hung up on it ;)

