Correct me if I'm wrong, but I believe Consul, lacking a mesh of its own, leverages the early-1990s trick of using round-robin DNS to split load across the available servers.

Caching those values for long periods subverts the point of the feature.




By way of correction: Consul does not simply "round robin" DNS requests unless you configure it in a particularly naive manner.

Prepared queries [1] and network tomography (which comes from the Serf underpinnings of the non-server agents) [2] allow a much wider range of topologies using DNS alone, without requiring proxies (assuming well-behaved client software, which is not a given by any stretch).
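
To make [1] concrete, here is a rough sketch using Consul's Go client (github.com/hashicorp/consul/api). The service name "web" and the failover settings are made up for illustration, and field names may differ across client versions; treat it as a sketch, not gospel:

    package main

    import (
        "log"

        "github.com/hashicorp/consul/api"
    )

    func main() {
        client, err := api.NewClient(api.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }

        // Register a prepared query: resolve "web" to healthy instances,
        // sorted by network-coordinate distance from the requesting agent,
        // failing over to the two nearest other datacenters if none are
        // passing locally.
        _, _, err = client.PreparedQuery().Create(&api.PreparedQueryDefinition{
            Name: "web",
            Service: api.ServiceQuery{
                Service:     "web",
                OnlyPassing: true,
                Near:        "_agent",
                Failover: api.QueryDatacenterOptions{
                    NearestN: 2,
                },
            },
        }, nil)
        if err != nil {
            log.Fatal(err)
        }
        // The query is now resolvable over plain DNS as web.query.consul.
    }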

Furthermore, Consul _does_ have a mesh, as of around two years ago [3].

You are correct though that long caches subvert much of the benefit.
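
If you're curious what clients are actually caching, a quick sketch (assuming a local agent on the default DNS port 8600 and the github.com/miekg/dns library; "web" is a made-up service name) shows the TTL Consul hands back, which defaults to 0 precisely so resolvers don't cache:

    package main

    import (
        "fmt"
        "log"

        "github.com/miekg/dns"
    )

    func main() {
        // Ask the local Consul agent's DNS endpoint for a service record.
        m := new(dns.Msg)
        m.SetQuestion("web.service.consul.", dns.TypeA)

        c := new(dns.Client)
        r, _, err := c.Exchange(m, "127.0.0.1:8600")
        if err != nil {
            log.Fatal(err)
        }
        for _, ans := range r.Answer {
            // Header().Ttl is what a caching resolver would honor;
            // Consul leaves it at 0 unless dns_config sets service TTLs.
            fmt.Printf("%s TTL=%d\n", ans.String(), ans.Header().Ttl)
        }
    }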

[1]: https://www.consul.io/api/query.html

[2]: https://www.consul.io/docs/internals/coordinates.html

[3]: https://www.consul.io/docs/connect/index.html


Not really - resolve all the backend servers to IPs and list all of them as nginx backends. When a backend server is added or removed, update the nginx backends.

Round-robin DNS balancing toward a small cluster is silly: you know when an instance is added to or removed from the pool, so why not push that load balancing onto the actual load balancer, which in your case is nginx?


You're talking about layering the thing on top of Consul that I already identified in my top level comment.

Consul itself advertises DNS resolution for service discovery.


Maybe I was not clear.

Whatever technology you use to register the active backends in DNS, rather than doing a name => IP lookup per request, you can resolve all those name => IP mappings when a service is brought up or taken down, and push the resolved map into the nginx config as a set of backends. That removes the need to query DNS per request.
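
As a rough sketch of that flow, again with Consul's Go client (the "web" service name and file path are illustrative; in practice consul-template does exactly this job for you):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"

        "github.com/hashicorp/consul/api"
    )

    func main() {
        client, err := api.NewClient(api.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }

        // Resolve all passing instances of the (hypothetical) "web" service
        // once, when the backend set changes, instead of per request via DNS.
        entries, _, err := client.Health().Service("web", "", true, nil)
        if err != nil {
            log.Fatal(err)
        }

        var b strings.Builder
        b.WriteString("upstream web_backend {\n")
        for _, e := range entries {
            addr := e.Service.Address
            if addr == "" {
                addr = e.Node.Address
            }
            fmt.Fprintf(&b, "    server %s:%d;\n", addr, e.Service.Port)
        }
        b.WriteString("}\n")

        // Push the resolved map into nginx and reload; nginx now balances
        // over fixed IPs with no DNS lookup on the request path.
        err = os.WriteFile("/etc/nginx/conf.d/web_upstream.conf", []byte(b.String()), 0o644)
        if err != nil {
            log.Fatal(err)
        }
        if err := exec.Command("nginx", "-s", "reload").Run(); err != nil {
            log.Fatal(err)
        }
    }

The point being: DNS (or the catalog) is only consulted when the backend set changes, never on the request path.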



