

Play with Kubernetes Quickly Using Docker - zwischenzug
https://zwischenzugs.wordpress.com/2015/04/06/play-with-kubernetes-quickly-using-docker/

======
gizzlon
Cool, looks like it's not _that_ hard to try out Kubernetes :)

Some of the Kubernetes terminology is explained more thoroughly here:

https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes

https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/walkthrough/README.md

~~~
zwischenzug
The first time I tried was in November. The Vagrant one is not hard, but the
salt provisioning was taking forever, and it was using a lot of memory, which
meant I had to shuffle around services on my server. Hence the 'soviet'
comment, as the OpenShift one was a lot easier:
https://zwischenzugs.wordpress.com/2015/04/01/play-with-an-openshift-paas-using-docker/

------
briandorsey
Also, for fun:

Which is faster? Setting up a @kubernetesio cluster vs. making a latte?

Brendan Burns answers that question:
https://youtu.be/7vZ9dRKRMyc

------
zwischenzug
Google's response went way beyond my expectations here.

~~~
otoburb
I didn't see anything in your blog post that refers to a response from Google.
Could you elaborate on that? Thanks!

~~~
zwischenzug
"I tried to follow Kubernetes’ Vagrant stand-up, but got frustrated with its
slow pace and clunkiness, which I characterized uncharitably as ‘soviet’.
Amazingly, a Twitter-whinge about this later and I got a message from the
Kubernetes Lead Engineer saying they were ‘working on it’. "

I'll try and make this clearer - thanks for the feedback!

~~~
chuhnk
Had similar pain points 6 months ago. Went through Vagrant, Docker, EC2, GCE,
GKE and found issues almost everywhere. That's just the way it goes in the
early days. 6-12 months down the line this will all be really easy and
seamless, with hopefully a lot more people contributing to the Kubernetes
ecosystem.

~~~
zwischenzug
Glad it wasn't just me... I did waste a lot of time 'researching' Kubernetes
this way.

~~~
brendandburns
I updated it over the weekend; it's down to 3 steps now:

https://github.com/brendandburns/kubernetes/blob/hyperkube/docs/getting-started-guides/docker.md

And I think I can get it down to a one-liner.

(Also, can you update your blog to point to the hyperkube:v0.14.1 image
instead of :dev? :dev is a random binary from my client, whereas v0.14.1 is
an official release... Thanks!)
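
Roughly, the three steps look like this (a sketch only - the etcd image/tag
and the exact kubelet flags here may not match the current doc, which is the
source of truth; the hyperkube:v0.14.1 tag is the official release mentioned
above):

    # 1. etcd, which stores the cluster state (image and tag are illustrative)
    docker run --net=host -d kubernetes/etcd:2.0.5.1 /usr/local/bin/etcd \
        --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data

    # 2. a kubelet that reads a static manifest and launches the master
    #    components (apiserver, scheduler, controller-manager) as a pod
    docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock \
        gcr.io/google_containers/hyperkube:v0.14.1 /hyperkube kubelet \
        --api_servers=http://localhost:8080 --address=0.0.0.0 --enable_server \
        --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests

    # 3. the service proxy
    docker run -d --net=host --privileged \
        gcr.io/google_containers/hyperkube:v0.14.1 /hyperkube proxy \
        --master=http://127.0.0.1:8080

After that you can point kubectl at http://localhost:8080 and start creating
pods.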

~~~
mdekkers
I have also tried Kubernetes a few times, and got stuck everywhere. I was
unable to find a "this is how you build a Kubernetes system from scratch"
document anywhere. All I managed to find were very specific how-tos for
specific system combinations, none of which fit my needs. I tried a few times
to build systems from the documentation list, but ran into issues at each
step.

My biggest worry and issue with running kubernetes in production is the
overall workflow of standing up an environment, which seems to be:

1. download some images/dockerfiles
2. [magic]
3. profit

I cannot find a document anywhere that tells me what components are used, how
they interact, what settings are required, etc. I'd love to be able to give
Kubernetes a try and see how it would work for our service, but am having a
very hard time getting the right level of detail. It appears to be either "go
the [magic] route" or "read all the code", with little in between.

If I had some kind of pointer, I'd be happy to write something up about how to
get it running for a prod setup...

~~~
zwischenzug
I think it's too early for you to think about Kub in production by the sounds
of it.

This whole area is still in the early stages; any documentation you see on
specifics is likely to be out of date soon.

If you try the hyperkube command you'll see very many command line options,
and I can only see that growing.
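
A quick way to get a feel for the option sprawl is to ask the image itself for
help on each component (using the v0.14.1 tag mentioned above; any release tag
should behave the same way):

    # dump the flag list for each hyperkube sub-command
    docker run --rm gcr.io/google_containers/hyperkube:v0.14.1 /hyperkube kubelet --help
    docker run --rm gcr.io/google_containers/hyperkube:v0.14.1 /hyperkube apiserver --help
    docker run --rm gcr.io/google_containers/hyperkube:v0.14.1 /hyperkube scheduler --help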

To an extent, though, there is a certain amount of magic in downloading
images that do stuff for you. For example, the scheduler is a pod that just
starts up on the master; what exactly it does, I've no idea yet.

To quote the kub github page:

"Kubernetes is in pre-production beta!

While the concepts and architecture in Kubernetes represent years of
experience designing and building large scale cluster manager at Google, the
Kubernetes project is still under heavy development. Expect bugs, design and
API changes as we bring it to a stable, production product over the coming
year."

~~~
mdekkers
> I think it's too early for you to think about Kub in production by the
> sounds of it.

Maybe this is getting lost in translation, but I find this line to be somewhat
condescending. I am asking for some hint as to where I can find documentation
that goes beyond "here, run the docker/vagrant/VM image, and have fun" - I
want to know what is what, which pieces talk to which pieces, and how. I am
specifically not asking "is this ready for production?", which is a decision I
am happy to make for myself, on the basis of the research I hope to do.

Just to be clear: I am an (admittedly increasingly unfashionable)
infrastructure guy. I am pretty sure that if I follow your examples, I can get
something up and running that allows me to feed some DSL into some tool, and
have a running set of containers. Not interesting to me. I am happy to believe
that this works, and am happy to take people's word for it.

I want to know and learn about required infrastructure, about failure domains,
about networking requirements, about load and overhead, and ultimately, about
"If I build something like this, what are going to be the issues in making
sure it will never go down". I see a lot of high-level architecture, which
appears to segue quite suddenly into "now do magic, and here is how you then
start a pod". I am asking about this in between stage, as I have been unable
to find this.

> This whole area is still in the early stages; any documentation you see on
> specifics are likely to be soon out of date.

I would expect any documentation to be at least somewhat relevant to the
version it is released with. It isn't important for us to be on the latest
and greatest - it is important for us to know and understand how the stuff we
use behaves and is to be operated, especially in failure modes. If my only
recourse during a failure is "well, let's try restarting a container running
something critical and see if the problem goes away", then indeed it isn't
ready for anything other than being a toy.

I do see the likes of Kismatic and now Tectonic making moves to run this as a
production system, so somewhere, somehow, it must be possible to stand up a
system that has not had all the key decisions made for you, and that would
allow you to build something that is ready for a particular environment.

Kismatic have actually released some packages that appear helpful in pulling
the pieces apart and I will be looking at those to get a better understanding
of what does what.

> To quote the kub github page: "Kubernetes is in pre-production beta!

Yeah, I saw that, thanks... From the perspective of the Kubernetes team,
"pre-production" likely means that they have not yet evaluated many probable
edge cases for many different use types. This is important to them, not so
much for me. What is important for me is that _my_ workload works. It is a
lot easier and faster for me to test this (and feed back the results, thus
helping the project towards production status) on the basis of an
infrastructure I know and understand than it is to put together something
[magic] and try to figure out why stuff isn't working. Case in point: I
followed one of the VMware install guides. At the "now run this image, and
xyz will happen" step, nothing happened. There was no reasonable possibility
of troubleshooting, as the image in question was/is a black box, and no
documentation I could find, outside of the "invoke these magic incantations".

To be very honest, I am not too bothered - there are plenty of viable
alternatives that do the same / similar things. We are evaluating a large list
of possible environments, and kube got crossed off the list pretty quickly,
which is a pity as it looks interesting.

~~~
zwischenzug
Didn't mean to be condescending, sorry if it came over that way :(

Anyway, FWIW I've had similar troubles with the documentation for both this
and OpenShift Origin.

------
metral
It's great to see more options out there for folks to start playing with
Kubernetes. It can be a bit daunting initially, especially for newcomers.

As an alternative, I started a project called "Corekube"
([https://github.com/metral/corekube](https://github.com/metral/corekube))
that allows you to kick the tires on Kubernetes on top of CoreOS.

Corekube is an OpenStack-based Heat template that just requires an OpenStack
provider to stand up a Kubernetes cluster in minutes - the cluster comes
complete with a private etcd discovery service, 1 K8s master, 3 K8s minions,
and a logic coordinator/deployment operator aka "the Overlord", which
interacts with etcd, Fleet and K8s via their APIs to set things up.

~~~
zwischenzug
That's good - the 'problem' with mine is that AFAICT there's no way to get
more minions going.

You should blog the steps for getting that up.

~~~
metral
Corekube's Heat template is structured to allow up to 12 minions to be used;
you just have to change the parameter.
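
For example, something along these lines at stack-create time (the template
file name and parameter name here are illustrative only - check the
template's parameters section for the real ones):

    # hypothetical invocation; substitute the actual template path and parameter name
    heat stack-create corekube \
        --template-file corekube-heat.yaml \
        -P kubernetes-minion-count=6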

I have a blog post on the Rackspace Dev Blog from when I initially released
it, but it is a bit outdated:
https://developer.rackspace.com/blog/running-coreos-and-kubernetes/

Most of the same information on the blog can be pulled from the README as far
as I'm aware, but I'm happy to answer any further questions.

------
PopsiclePete
Hadn't realized Kubernetes was written in Go. Cool.

