
Agreed about the poor title, but:

> DNS resolution is indeed a bit slower in our containers (the explanation is interesting, I will leave that for another post).

I would like to see this expanded upon, or to hear whether anyone else has run into something similar.

Exactly! On our OpenShift production cluster we ran into ndots problems with DNS, and slow DNS resolution overall. This blog post was very helpful in understanding the issue and ways to fix it: https://pracucci.com/kubernetes-dns-resolution-ndots-options...
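The ndots problem mentioned above comes from how the glibc resolver expands unqualified names against the search list. Below is a minimal sketch of that expansion, assuming the typical defaults a Kubernetes pod gets (ndots:5 and three cluster search domains for a pod in the "default" namespace); the constants are illustrative, not taken from this thread.

```python
# Sketch of glibc search-list expansion under a typical Kubernetes
# pod resolv.conf. Assumed defaults (not from the thread):
#   options ndots:5
#   search default.svc.cluster.local svc.cluster.local cluster.local
NDOTS = 5
SEARCH = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]

def candidate_queries(name: str) -> list[str]:
    """Return the absolute names the resolver tries, in order."""
    if name.endswith("."):
        # Fully qualified (trailing dot): exactly one query, no expansion.
        return [name]
    if name.count(".") >= NDOTS:
        # "Enough" dots: try the name as-is first, then the search list.
        return [name + "."] + [f"{name}.{d}." for d in SEARCH]
    # Fewer dots than ndots: walk the whole search list before trying
    # the name as-is -- this is why external lookups get slow.
    return [f"{name}.{d}." for d in SEARCH] + [name + "."]
```

With ndots:5, even `example.com` (one dot) is expanded against all three search domains before being tried as-is, so a single external lookup can cost four DNS queries (eight with A + AAAA), three of which are guaranteed NXDOMAINs.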

Yeah, Tim Hockin and I still regret not designing the DNS name search process in Kube better. If we had, we would have avoided the need for ndots:5 and could have kept 90% of the usability win of "name.namespace.svc" and "name" being resolvable to services without it. And now we can't change the default without breaking existing apps.
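For apps that mostly talk to external names, one common mitigation (not something proposed in this thread) is to override ndots per pod via the `dnsConfig` field. A hypothetical pod spec fragment, with illustrative names:

```yaml
# Example only: lower ndots so names like example.com are tried
# as-is before the cluster search domains are consulted.
apiVersion: v1
kind: Pod
metadata:
  name: ndots-demo        # hypothetical name
spec:
  containers:
    - name: app
      image: busybox      # placeholder image
  dnsConfig:
    options:
      - name: ndots
        value: "1"
```

The trade-off is exactly the one described above: with ndots:1, the short "name" and "name.namespace.svc" forms stop resolving through the search list, so this only suits pods that use fully qualified names (or a trailing dot) for in-cluster services.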

Forwards compatibility is painful.

Pardon my lack of Kubernetes knowledge, but are there any regrets about supporting the hierarchical lookup where apps don't have to fully qualify their DNS requests (when perhaps some other mechanism could have been used to find their "same namespace")?

Good question. I certainly use the “name” and “name.namespace.svc” forms extensively for both “apps in a single namespace” and “apps generic to a cluster”.

I know a small percentage of clusters make their service networks publicly resolvable via DNS (so a.b.svc.cluster-a.myco is reachable from most places).

The “namespace” auto-injected file was created long after this was settled, so that wasn’t an option. I believe most of the input was “we don’t like the auto env var injection that Docker did, so let’s just keep it simple and use DNS the way it was quasi-intended”.

Certainly we intended many of the things Istio does to be part of Kube natively (like cert injection and auto proxying of requests). We just wound up having too much else to do.
