Hacker News
A Primer: Accessing Services in Kubernetes (alexellis.io)
41 points by alexellisuk on Feb 5, 2022 | 14 comments



We have been using inlets since it first came out in 2020. It is super robust. It is a great way to peer clusters, for example. Another use case for us is multi-cluster setups, where we expose each cluster's API server to our platform. We had to build a lot of tooling around the product early on; now it looks like they have built a pretty good stack around inlets[1].

From my perspective, it is like ngrok but for kubernetes.

[1] https://github.com/inlets


Does anyone use k8s clusters that don't come with Ingress these days? This article exposes a lot of complexity for what should just be "use ingress".

More advanced ways to get traffic into the cluster are only really needed for non-https traffic. It comes up, but way more rarely.


How does "just using ingress" work on your local cluster with K3d/KinD/minikube/Docker Desktop etc?

There are so many options, and Ingress isn't the answer for those. An Ingress isn't a thing that can receive traffic; an Ingress Controller is, and the controller itself has to be exposed somehow on ports 80 and 443.

The lowest common denominator is the one which works everywhere - kubectl port-forward. After that, hobbyists have a propensity for MetalLB, but advertising ARP ranges isn't necessarily safe on a work network.
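
For reference, the port-forward case is a one-liner; the service name and ports here are placeholders, not anything from the article:

    # forwards local port 8080 to port 80 of a (hypothetical) Service named my-service
    kubectl port-forward svc/my-service 8080:80
    # then browse http://localhost:8080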

I think the takeaway is YMMV. For production, it's of course Ingress Controller plus a cloud-managed LoadBalancer.


> How does "just using ingress" work on your local cluster with K3d/KinD/minikube/Docker Desktop etc?

The nginx ingress controller works great when running on a local cluster using KinD and others. You can drop it in and it works; then in your ingress config you can use hostname + path combinations that produce URLs like `localhost/myapp/test` or `localhost/myapp/prod` to test different app + environment combos if you are using Kustomize or Helm variables. No port forwarding or extra tooling needed.
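
A minimal sketch of what such an Ingress might look like (the names, paths, and ingress class are illustrative assumptions):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp-test           # hypothetical name
    spec:
      ingressClassName: nginx    # assumes ingress-nginx is installed in the cluster
      rules:
        - http:
            paths:
              - path: /myapp/test      # serves http://localhost/myapp/test
                pathType: Prefix
                backend:
                  service:
                    name: myapp        # hypothetical ClusterIP service
                    port:
                      number: 80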

This lets you keep things fairly simple at the Kubernetes level where all services end up being a ClusterIP type and then you hook up an ingress to the services you want to route traffic to from outside of your cluster.

The same ends up working in production, except you'd use your cloud provider's ingress controller instead if they happen to have one (such as the AWS Load Balancer Controller). From your service's point of view nothing changes; the ingress config is also mostly the same except for a few different annotations, which is easy to account for with Kustomize or Helm. This gives you a consistent interface to your services in any environment.
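
As a rough illustration of the "only the annotations change" point, an Ingress targeting the AWS Load Balancer Controller might differ only in lines like these (the annotations and class name shown are common defaults, not taken from the comment):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp-test
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing   # ALB-specific annotation
        alb.ingress.kubernetes.io/target-type: ip           # route straight to pod IPs
    spec:
      ingressClassName: alb      # AWS Load Balancer Controller's ingress class
      # rules: ... same as the local version above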


Even k3s/k3d (I haven't used the others, but I assume it's the same) come with an ingress controller built in (Traefik in the case of k3s), and it's easy to switch to nginx. So the OP is right: this is unnecessary complexity, and it really is just an ingress, even in a local cluster.


You can launch kind clusters with some NodePorts bound to your workstation; the nodes are just Docker containers after all.
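
A minimal kind config sketch for that (the port numbers are arbitrary examples):

    # kind-config.yaml -- create the cluster with: kind create cluster --config kind-config.yaml
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      - role: control-plane
        extraPortMappings:
          - containerPort: 30080   # NodePort exposed by a Service inside the cluster
            hostPort: 8080         # reachable as localhost:8080 on the workstation
            protocol: TCP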

My favorite way is to use MetalLB - simply deploy it with an address pool from the same subnet as your Docker kind network, and your LoadBalancer services will be routable from your workstation.
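
A sketch of that MetalLB config, using the CRD-based format of newer MetalLB releases (older releases take the same pool via a ConfigMap). The 172.18.x.x range assumes the default kind Docker network; check yours with `docker network inspect kind`:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: kind-pool
      namespace: metallb-system
    spec:
      addresses:
        - 172.18.255.200-172.18.255.250   # a slice of the kind Docker network's subnet
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: kind-l2
      namespace: metallb-system
    spec:
      ipAddressPools:
        - kind-pool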


Yes! I use Vultr for my personal kubernetes clusters and it does not come with an ingress controller.

> Does VKE come with an ingress controller?

> No, VKE does not come with an ingress controller preconfigured. Vultr Load Balancers will work with any ingress controller you deploy. Popular ingress controllers include Nginx, HAProxy, and Traefik.

(as an aside: VKE is in beta and they really do mean "beta". It has many issues that need to be resolved (for example, I haven't been able to connect to the control plane for weeks because of a memory leak), so at this time I'd only recommend trying them out for non-mission-critical stuff. I would have been fired by now if I had ported my startup over to this.)


Additionally, with Vultr VKE the control plane is exposed to the public without any way to whitelist source IP addresses. I asked them to support IP whitelisting as a bare minimum.


The author mentions ingresses don’t work for non-http TCP traffic. Is that true?


Nope, but it really depends on which Ingress you use. This isn't a k8s/ingress-only issue either: HAProxy, for example, is technically an L7 tool and people compare it to NGINX a lot, yet HAProxy only covers a little bit of what NGINX can really do.

99% of the time, k8s ingress controllers on bare metal boil down to:

- A DaemonSet (a container that usually runs on every node in your cluster, but could be fewer) with access to your nodes' ports (rough sketch after this list)

- One or more controller deployments/pods/statefulsets (maybe the DS, maybe not) watching for Ingress objects or other Custom Resources, reconfiguring the DS on the fly
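
For the first bullet, a minimal sketch of that DaemonSet pattern (image and names are illustrative, not from any particular controller):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: ingress-proxy          # hypothetical name
      namespace: ingress-system
    spec:
      selector:
        matchLabels:
          app: ingress-proxy
      template:
        metadata:
          labels:
            app: ingress-proxy
        spec:
          containers:
            - name: proxy
              image: example.com/some-ingress-proxy:latest   # placeholder image
              ports:
                - containerPort: 80
                  hostPort: 80     # binds port 80 on every node
                - containerPort: 443
                  hostPort: 443    # binds port 443 on every node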

Things are different in the cloud, where distributions heavily rely on LoadBalancers, but if you squint it's pretty similar (I'd also argue that LoadBalancers shouldn't have been added from the get-go, but that's a whole 'nother discussion).

Traefik[0] is the ingress I use and I've written about why I switched to it and continue to choose it[1]. Long story short, I do TCP and UDP traffic with Traefik and it works great.

[0]: https://doc.traefik.io/traefik

[1]: https://vadosware.io/post/ingress-controller-considerations-...
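
For the TCP part, a minimal sketch using Traefik's IngressRouteTCP CRD (the entrypoint, service, and port are placeholders, and the apiVersion depends on your Traefik release):

    apiVersion: traefik.containo.us/v1alpha1   # traefik.io/v1alpha1 on newer releases
    kind: IngressRouteTCP
    metadata:
      name: postgres-tcp            # hypothetical name
    spec:
      entryPoints:
        - postgres                  # an entryPoint you defined on e.g. port 5432
      routes:
        - match: HostSNI(`*`)       # plain TCP, no TLS SNI matching
          services:
            - name: postgres        # hypothetical ClusterIP service
              port: 5432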


Thank you for the excellent answer!


Has anyone had success using WireGuard to VPN into their cluster and have local access to pods via K8s DNS?


Yes, at my startup we run https://github.com/Place1/wg-access-server inside our kubernetes clusters to give engineers access to in-cluster pods via the k8s DNS (*.svc.cluster.local). It works relatively well, but we've had to make a couple of modifications:

1. I had to deploy CoreDNS to our cluster and set up DNS query rewriting to redirect queries from *.cluster.mystartup to *.svc.cluster.local, because Docker on OS X has issues resolving hostnames that end in .local (rough sketch after this list)

2. I had to modify the code to support a non-standard MTU to deal with a networking issue related to google cloud.
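
For point 1, a sketch of the kind of CoreDNS rewrite rule involved, shown as a Corefile fragment (the domain names follow the comment; the exact zone layout is an assumption):

    # rewrite *.cluster.mystartup queries to *.svc.cluster.local
    .:53 {
        rewrite name suffix .cluster.mystartup .svc.cluster.local
        forward . /etc/resolv.conf
        errors
        cache 30
    }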


Not plain WireGuard, but I've set up Tailscale with a proxy container + their MagicDNS pointed towards the kube-dns pod (running on GKE Autopilot).

It works like a dream and was dirt simple to set up, but would probably require some additional work once you go beyond one cluster (which I think is true for cross-cluster service discovery regardless in Kubernetes).



