Oftentimes the behavior of the endpoints (e.g. discovery, reconnecting, timeouts, etc) is so critical that you need it in all clients, which totally defeats the "decoupling" of microservices (i.e. that any language or technology can participate). That is, extremely complicated endpoint logic requires shared components (see the referenced article).
You could get all of that behavior by having a semi-smart broker like RabbitMQ without extremely complicated endpoints... but that's not microservices.. it's the evil old enterprise single point of failure.
Perhaps I'm biased, old school, or just jaded from hearing "microservice", but I really think many companies, particularly young startups, could get more work done with an intelligent pipe (i.e. a message broker) than with microservices, which are fairly complicated to set up and duplicate intelligent endpoint logic everywhere.
TL;DR semi-smart pipes are looking better and better these days.
I am not sure why people choose such an architecture, but for me the biggest reason to do this is to be able to run it on platforms like Mesos, Kubernetes, Nomad, etc. Once you've aligned your system with such a platform you get plenty of benefits (improved workflow, reliability, fault tolerance, cloud mobility). But the price is pretty big.
The good part is that good tooling is becoming more widespread and open source projects like this make it easier for people to start up.
The goal is to offload that logic out of your application.
But isn't it? Presuming you're writing RESTful things, and emitting the right error codes for the right reasons, any RFC-compliant HTTP client (of which at least one should exist in every language) should automatically be capable of doing all those things.
IMHO, though, startups would do much better writing "component" libraries that force the same decoupling as microservices, without paying the cost of creating a distributed system from the beginning. Then, later, they can lift a "component" out and define another version of its 'client' library which is RPC rather than local. (In other words, follow the idiomatic approach to designing Erlang applications, and you'll scale nicely.)
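A rough sketch of that "component library" idea in Go, with all names hypothetical: callers depend only on a small interface, the day-one implementation is in-process, and a later RPC-backed implementation of the same interface can be swapped in without touching callers.

```go
package main

import "fmt"

// UserStore is the component's public surface; callers import only this.
type UserStore interface {
	Get(id string) (string, error)
}

// localStore is the in-process implementation used from the beginning.
type localStore struct {
	users map[string]string
}

func (s *localStore) Get(id string) (string, error) {
	name, ok := s.users[id]
	if !ok {
		return "", fmt.Errorf("user %q not found", id)
	}
	return name, nil
}

// Later, an rpcStore satisfying the same UserStore interface could wrap
// a network client; code written against the interface would not change.

func main() {
	var store UserStore = &localStore{users: map[string]string{"1": "alice"}}
	name, err := store.Get("1")
	fmt.Println(name, err)
}
```

The forced boundary is the interface itself; lifting the component out later is a change to the wiring in `main`, not to its consumers.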
One more reason to start reading that book I bought a few months ago!!
Beyond Micro, I'm working on go-platform https://github.com/micro/go-platform which addresses the higher level needs for microservices. Auth, monitoring, metrics, distributed tracing, etc, etc. Again pluggable like Micro. Still in its infancy.
At what point do we just admit that we've re-invented the large co-ordination frameworks of years past? Like CORBA?
Is it the point where we add a JSON schema validator? or switch to Protocol Buffers or some variant?
Moving to a distributed system is inevitable when you want to scale. We make certain tradeoffs doing it. Over the next couple of decades we'll see some cycles happen again where monoliths are cool again and then distributed systems come back into fashion. Just the way it goes.
Go does not support DSOs yet, so it's a bit difficult to do "hot swapping" in that way, if that's what you mean.
Honest feedback--and this is a bit out of left field--but I think that the author of this project needs to SLOW DOWN.
Here is his Github contribution chart:
This... does not look like a sustainable level of work. 100 straight days of coding since last December? Yikes.
This sounds like an interesting project, founded on good ideas, but sustaining this level of work will do no good if you burn out before there are other contributors to pick up the slack. I know that a lot of people can keep working this hard for much longer, but quality will inevitably start to suffer, well before you actually burn out. And, at the very least, you need to take the time to step back and validate your ideas. This is something you'll never do if you're always heads-down coding.
I'm not saying that taking more breaks (weekends off, etc) works for everyone, but I just thought something needed to be said. I've seen friends and former colleagues fall into the trap of working on a project so deeply that it becomes a crazy obsession that they cannot escape. It's all they talk about and it can kind of scare people away. Not saying that this is such an extreme case, but it's something that can happen if you're not careful.
Appreciate the comments.
I felt the need to speak up because I've seen contribution graphs EXACTLY like the author's and yours, except where the end result was not so happy. Take Zach Holman's for example:
https://zachholman.com/posts/streaks/ (chart is 2/3 down the post)
Zach's streak was great for him. Until suddenly it wasn't, and far too late he realized he was burnt out. He's been amazingly public about the whole experience (his subsequent sabbatical, then his being fired from Github, then his year of soul-searching), and for every public story like this I suspect there are many, many private examples that go unheard.
I think it's great when you can reach such a peak of productivity, like in your case. But it's important to recognize that you're kind of an outlier. The endless contribution "streak" doesn't work for everyone, at least not always and indefinitely.
I don't want to generalize about what is always a good level of work and what isn't. What I DO want to communicate is that it's important to be introspective and take periodic steps back to evaluate the sustainability.
After all, a level of work that seems unsustainable might turn out to be quite manageable, and vice versa.
This reads like a joke... but reading on I can't tell whether you meant it...
The combination RPC / Pubsub / Reverse Proxy + registry is pretty sharp.
Anything that can do most of the work of haproxy and be easier to administer gets my vote.