
Setting up an HA Kubernetes cluster with private networking in AWS - kris-nova
https://www.nivenly.com/k8s-aws-private-networking/
======
jondubois
I wrote a similar article a while ago, except using Rancher, which adds extra
features for infrastructure management: [https://blog.baasil.io/how-to-
install-kubernetes-on-aws-d9fb...](https://blog.baasil.io/how-to-install-
kubernetes-on-aws-d9fbbc04e816#.cnqfv841r)

------
whalesalad
Good lord this is so much easier than it was 6 months ago. Really glad to see
the barrier of entry going down to get these clusters built. Kubernetes has
been paramount to our engineering growth at FarmLogs.

~~~
chrislovecnm
Yeah, I agree. The K8s install has been the bane of the project's existence.

~~~
kris-nova
Happy to know we are finally "here" with deploys/installs!

------
iddqd
kops is pretty great for managing the lifecycle of a k8s cluster. It even
supports outputting Terraform code, if that is your Infrastructure-as-Code
tool of choice.
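
To make the Terraform output concrete, here is a hedged sketch of what that
workflow looks like. The cluster name, zone, and S3 state bucket are
placeholders, but `--target=terraform` and `--out` are the relevant kops flags:

```shell
# Assumes kops and terraform are installed and AWS credentials are configured.
# The state bucket is a placeholder and must already exist.
export KOPS_STATE_STORE=s3://example-kops-state

# Generate Terraform code instead of creating AWS resources directly
kops create cluster \
  --name=cluster.example.com \
  --zones=us-east-1a \
  --target=terraform \
  --out=./terraform-out

# Review and apply with your normal Terraform workflow
cd terraform-out && terraform plan
```

This way the cluster lives alongside the rest of your Terraform-managed
infrastructure rather than being applied opaquely by kops itself.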

~~~
kris-nova
One of the best features of kops is its ability to be open-ended in
generating what the user wants! The private networking piece of the Terraform
codegen is right around the corner, and rumor has it, it will be used in some
new API tests.

------
alexbilbie
ECS tasks now support IAM roles; how does this work for K8s equivalent? Do you
need to use IAM user credentials for app credentials?

~~~
tazjin
IAM credentials for ECS tasks are implemented with a transparent proxy to the
metadata API, and there are similar projects for Kubernetes.

The most popular one, I believe, is kube2iam:
[https://github.com/jtblin/kube2iam](https://github.com/jtblin/kube2iam)

Using IAM roles then boils down to setting an annotation such as
`iam.amazonaws.com/role: my-db-access-role` on a pod (assuming the correct
trust relationship has been configured).
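
As an illustration, a pod spec using that annotation might look like the
following sketch. The pod name, role name, and image are hypothetical
placeholders; only the `iam.amazonaws.com/role` annotation key comes from
kube2iam:

```yaml
# Hypothetical example; role name and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: db-client
  annotations:
    # kube2iam intercepts this pod's calls to the EC2 metadata API and
    # serves temporary credentials for the annotated role instead.
    iam.amazonaws.com/role: my-db-access-role
spec:
  containers:
    - name: app
      image: example/app:latest
```

The application inside the container then picks up credentials via the usual
AWS SDK default credential chain, with no IAM user keys baked into the image.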

------
sandGorgon
kops is nice... but too tied to AWS.

I'm really liking the direction that kargo is taking. It leverages a well
known toolset - ansible - to build out its functionality.

kargo is compatible with everything from bare metal to AWS... with the caveat
that when using it in AWS, you have to use a different provisioning tool like
Terraform.

This is a feature, not a bug.

~~~
justinsb
Kops actually isn't tied to AWS, though we only support AWS right now. We'll
likely be adding GCE support next; user-requested features on AWS have taken
priority so far, though...

~~~
sandGorgon
@justinsb - oh, I didn't know that. For example, we are struggling to use kops
on bare metal. However, I quite understand your AWS feature priority.

~~~
justinsb
So right from the start I wanted to make sure it wasn't locked to AWS. We used
to have GCE support enabled (and it still lives on a branch, though obviously
it has bit-rotted as we've added more AWS features). We built GCE support
early precisely so we could be sure the abstraction wasn't AWS-specific. We
also have bare metal support on another (more experimental) branch. Bare metal
is much more challenging, largely because on clouds we can run etcd in HA mode
with automatic instance replacement, which isn't really possible on bare
metal; it looks like we have to settle for HA with operator intervention when
an etcd instance fails.

So I actually think kargo is a great choice on bare-metal - and it also does a
great job of running on multiple clouds and bare metal. But the design of kops
isn't particularly tied to AWS, though it does match better with clouds.

------
marcoceppi
This is a nice write-up, but we honestly needed to stand up Kubernetes on-
premise and in public/private clouds. This is why we've been using this:
[https://www.youtube.com/watch?v=B7nMFVaOOi8](https://www.youtube.com/watch?v=B7nMFVaOOi8)

