
Kops – Kubernetes Operations - emersonrsantos
https://github.com/kubernetes/kops
======
awinter-py
FWIW kops gave my team a better kube experience on AWS than EKS (their managed
version)

guessing EKS isn't as bad now as it was 12 months ago, but 'managed kube'
varies tremendously across clouds, especially re ingress & overlay networks

in my experience with AWS's EKS, Google's GKE, and DigitalOcean's managed
offering, GKE has the tightest integration between load balancers and ingress,
and even that can be clunky
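
for context, the GKE integration means a bare Ingress object is enough to get
a Google HTTP(S) load balancer provisioned for you; a minimal sketch (the
names here are hypothetical, and the Service behind it has to exist already):

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: demo-ingress            # hypothetical name
    spec:
      backend:
        serviceName: demo-svc       # hypothetical NodePort Service
        servicePort: 80
    EOF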

kops is overlay-agnostic and lets you choose, which is a con but also a pro
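
for example, the overlay is a single flag at cluster-creation time; a sketch
with placeholder names (calico is just one of the supported options, alongside
weave, flannel, and others):

    kops create cluster \
      --name=demo.example.com \
      --state=s3://example-kops-state \
      --zones=us-east-1a \
      --networking=calico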

my guess is anyone who needs serious networking performance is still not sold
on kube

~~~
nodesocket
Heads up, AWS has a public GitHub repo ([https://github.com/aws/containers-roadmap](https://github.com/aws/containers-roadmap)) where users can submit
bugs and feature requests for managed Kubernetes (EKS). It is quite robust and
actively maintained by AWS engineers.

Recently, managed Kubernetes on AWS (EKS) rolled out the ability to make your
master endpoint public or private. Previously, if you set it to private, you
were only able to look up the master endpoint's DNS entry from within the VPC.
After much backlash[1], they now advertise the private IP publicly, which
allows much greater flexibility. A highly anticipated and nice change.
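
If I recall correctly, the toggle is the endpoint access flags on the
cluster's VPC config; roughly (the cluster name is a placeholder):

    # private-only API endpoint; the DNS name now resolves to the
    # private IPs even from outside the VPC
    aws eks update-cluster-config \
      --name demo-cluster \
      --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true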

[1] [https://github.com/aws/containers-roadmap/issues/221](https://github.com/aws/containers-roadmap/issues/221)

~~~
awinter-py
making an open source contribution to benefit a big 5 company with deep
networking expertise in-house and which _also_ has forked secret versions of
mysql and postgres should at least earn me some kind of rebate

~~~
nodesocket
huh? A rebate? How do you figure?

~~~
awinter-py
oh wait, this is just a bug tracker -- I thought you were suggesting people
make OSS contributions to kube plugins that only work on EKS

------
hardwaresofton
If you're looking to stand up a Kubernetes cluster and _not_ on AWS, take a
look at kubeadm[0][1]. kubeadm does just enough to build a cluster and not
much more -- it's a really light-feeling (even though what it sets up is
complicated) bit of kit.

I use it to run Kubernetes on bare metal, and it's a wonderfully simple and
robust tool (as well as officially supported) -- it's almost _too_ good,
because you should really know everything it's doing; if you use it from the
start, it's easy to get lost when something goes wrong (most of the time
nothing will, but eventually something will).
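
To give a flavor of how little ceremony there is, standing up a cluster is
roughly two commands (the CIDR and endpoint below are illustrative; kubeadm
init prints the exact join command for your cluster):

    # on the control-plane node
    kubeadm init --pod-network-cidr=10.244.0.0/16

    # on each worker, with the token/hash that init printed
    kubeadm join 10.0.0.10:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>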

[0]: [https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/)

[1]: [https://github.com/kubernetes/kubeadm/](https://github.com/kubernetes/kubeadm/)

~~~
escardin
The difference between kubeadm and 'The Hard Way' is setting up a ton of
certificates you then have to manage and manifests/config for etcd, kubelet,
kube-proxy, kube-apiserver, kube-controller-manager, and CoreDNS. If you diff
the default manifests of kubeadm and 'The Hard Way', you'll find they're not
especially different.
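
For a sense of what that looks like, 'The Hard Way' has you mint the PKI by
hand with cfssl, roughly like this for the CA and each component cert (the
CSR/config JSON files are ones you write yourself):

    # create the CA, then sign a per-component cert (admin shown here)
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca
    cfssl gencert \
      -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
      -profile=kubernetes admin-csr.json | cfssljson -bare admin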

You should still take the time to read the docs on the components and decide
if you want to make changes. 'The Hard Way' doesn't explain what's going on
any more than kubeadm does. It's just a set of instructions to follow (that
are GCP specific).

~~~
hardwaresofton
> The difference between kubeadm and 'The Hard Way' is setting up a ton of
> certificates you then have to manage and manifests/config for etcd, kubelet,
> kube-proxy, kube-apiserver, kube-controller-manager, and CoreDNS. If you
> diff the default manifests of kubeadm and 'The Hard Way', you'll find
> they're not especially different.

The end results are similar but the process is vastly different. kubeadm is
almost a one-shot tool; doing it the hard way is a learning process. I was
trying to say that obviously when you want to just get it done (or script your
cluster) you should use kubeadm, but if you're learning k8s it's better to do
it the hard way at least once. Building an intuition for what the system is
made up of (and where to go/which logs to look at when something goes wrong)
is important.

> 'The Hard Way' doesn't explain what's going on any more than kubeadm does

Even just following a guide, running the ancillary tools yourself, downloading
and running the binaries, etc. is way more explanation than kubeadm or other
tools give you -- 90% of the time on a fresh machine you just run kubeadm and
you're "done" (for some sense of the word).

Before kubernetes the hard way[0] (which is a fantastic resource, though it's
GCP-specific) existed, I used the CoreOS guides[1] (this was before they
started just pushing "use tectonic", and obviously before they were bought out
and essentially merged into atomic/fedora/whatever) -- they were fantastic and
walked through more of the what and why of the pieces of k8s. Reading through
those guides was essential to building my intuition about k8s (I've even
written about the process[2]), and I think it's important for people to do
this experimentation as well.

One of the best things about k8s is its ability to unchain you from cloud
providers -- using only managed k8s (without something like crossplane[3])
would effectively squander that advantage IMO. Yes, most companies don't
actually _need_ to run multi-cloud, but I expect that the ability to offer k8s
as a managed service interface will make it easy for _anyone_ to become/run a
cloud provider, and that's a future I want to see -- more cloud providers
means more competition, which means a better world for startups and those who
consume hosting services.

[0]: [https://github.com/kelseyhightower/kubernetes-the-hard-way](https://github.com/kelseyhightower/kubernetes-the-hard-way)

[1]: [https://coreos.com/kubernetes/docs/1.2.2](https://coreos.com/kubernetes/docs/1.2.2)

[2]: [https://vadosware.io/post/fresh-dedicated-server-to-single-node-kubernetes-cluster-on-coreos-part-2/](https://vadosware.io/post/fresh-dedicated-server-to-single-node-kubernetes-cluster-on-coreos-part-2/)

[3]: [https://crossplane.io/](https://crossplane.io/)

------
hodgesrm
Kops is very convenient. It's been the go-to solution for standing up
Kubernetes clusters in my last two companies. In fact, I'm using it tomorrow
to set up a demo cluster for a talk in San Diego.
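
For reference, the whole flow for a throwaway demo cluster fits in a few lines
(the bucket and domain names are placeholders):

    export KOPS_STATE_STORE=s3://example-kops-state
    kops create cluster --name=demo.example.com --zones=us-west-2a
    kops update cluster --name=demo.example.com --yes
    kops validate cluster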

------
DiabloD3
If nobody makes a thing called Keystone for this Kops thing, I am going to be
very disappointed.

------
CSDude
I once created a Kops cluster and decided to move the masters from the m4 to
the m5 family, then noticed the AMI didn't have the drivers to mount NVMe
disks. The blue-green rollout didn't work, and by the time I could revert, the
old master instances had been terminated and I couldn't get them back. But
that was ~2 years ago, and I believe it would still be much more useful than
the current state EKS is in.
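
(The upgrade path was the standard instance-group edit plus rolling update;
sketching it with an illustrative group name:)

    kops edit ig master-us-east-1a   # change machineType, e.g. m4.large -> m5.large
    kops update cluster --yes
    kops rolling-update cluster --instance-group master-us-east-1a --yes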

------
saintfiends
I like the fact that it can generate Terraform configurations.
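
For anyone who hasn't seen it, you point kops at a Terraform target instead of
having it apply changes directly (names are placeholders):

    kops create cluster \
      --name=demo.example.com \
      --state=s3://example-kops-state \
      --zones=us-east-1a \
      --target=terraform --out=.
    terraform init && terraform apply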

------
privacyonsec
I read Kpops

