More and more I see toolkits essentially turning microservices into a message queue system. So much intelligent behavior is needed at the endpoints that it's starting to leak into the pipes.
Oftentimes the behavior of the endpoints (e.g. discovery, reconnecting, timeouts, etc.) is so critical that you need it in all clients, which totally defeats the "decoupling" of microservices [1] (i.e. any language or technology can participate). That is, extremely complicated endpoint logic requires shared components (see the referenced article).
You could get all of that behavior with a semi-smart broker like RabbitMQ, without extremely complicated endpoints... but that's not microservices... it's the evil old enterprise single point of failure.
Perhaps I'm biased, old school, or just jaded from hearing "microservices", but I really think many companies, particularly young startups, could get more work done with an intelligent pipe (i.e. a message broker) than with complicated-to-set-up microservices and duplicated intelligent endpoint logic.
TL;DR semi-smart pipes are looking better and better these days.
Your observation about clients requiring more and more logic (e.g. discovery, reconnecting, timeouts, etc.) is spot on. It is a pretty big overhead, and teams building an architecture based on microservices should be prepared to pay that cost.
I am not sure why people choose such an architecture, but for me the biggest reason to do this is to be able to run it on platforms like Mesos, Kubernetes, Nomad, etc. Once you've aligned your system with such a platform you get plenty of benefits (improved workflow, reliability, fault tolerance, cloud mobility). But the price is pretty big.
The good part is that solid tooling is becoming more widespread, and open source projects like this make it easier for people to get started.
I want to second the fact that a microservice architecture is great because of the ease of deployments (but only after the initial overhead!). I can attest first-hand to the large overhead that comes with creating something from scratch with micro-services - we are still battling with Mesos/Marathon, and recently found Mantl (https://mantl.io/) which has been a huge help.
You may be interested in linkerd (https://linkerd.io) -- we built it specifically to address the complexity of proper connection management at scale (service discovery, timeouts, load balancing, retry policies). It extends to routing and instrumentation as well.
The goal is to offload that logic out of your application.
> Often times the behavior of the endpoints (e.g. discovery, reconnecting, timeout, etc) is so critical that you need that in all clients
But isn't it? Presuming you're writing RESTful things and emitting the right error codes for the right reasons, any RFC-compliant HTTP client (of which there should exist at least one in every language) should automatically be capable of doing all those things.
IMHO, though, startups would do much better writing "component" libraries that force the same decoupling as microservices, without paying the cost of creating a distributed system from the beginning. Then, later, they can lift a "component" out and define another version of its 'client' library which is RPC rather than local. (In other words, follow the idiomatic approach to designing Erlang applications, and you'll scale nicely.)
Hi, author here. I built Micro based on my experience at Google and Hailo. At Hailo we built a global microservices platform with over 200 bespoke services in production by the time I left. I realised that more and more people were tackling the problem of scale both technically and as an organisation. I felt like the tools were lacking and most companies end up building their own from scratch. Micro was a way of creating a foundation for writing and running distributed systems. It's a pluggable architecture so that the underlying systems can be swapped out based on preference.
Beyond Micro, I'm working on go-platform https://github.com/micro/go-platform which addresses the higher level needs for microservices. Auth, monitoring, metrics, distributed tracing, etc, etc. Again pluggable like Micro. It's still in its infancy.
"""Micro was a way of creating a foundation for writing and running distributed systems. It's a pluggable architecture so that the underlying systems can be swapped out based on preference."""
At what point do we just admit that we've re-invented the large co-ordination frameworks of years past? Like CORBA?
Is it the point where we add a JSON schema validator? or switch to Protocol Buffers or some variant?
I think those who've been doing this for a while joke about it all the time. I know I do. I and many others have no illusions about the cyclical nature of technology. It's just that at certain points in time we've tried certain methods and they just weren't the right fit, or were bastardised to the point of becoming more of a problem. So we try again, and lo and behold, we start to get somewhere with it.
Moving to a distributed system is inevitable when you want to scale. We make certain tradeoffs doing it. Over the next couple of decades we'll see some cycles happen again where monoliths are cool again and then distributed systems come back into fashion. Just the way it goes.
I just had a different vision for how such a thing would be built based on my experiences, and I think the great thing about programming and open source is that we're free to explore our own choices. I'm used to writing software in a certain way based on my cumulative experience, and there are certain types of people who also gravitate towards it.
That's really cool. You're right that we have so much freedom to interpret the problems we see and solve them in our own right. Thanks for putting this out there and standing up for your work.
Could you expand on how micro is different from go-kit and why would somebody choose one over the other? Something for the people who are trying to make a decision without getting their feet wet with both toolkits first.
Fault tolerant in what way? And hot swapping in what way? Go Micro uses client-side selection and load balancing, so you can set the number of retries and the timeout period so that it will iterate through nodes for a given request. There are also implementations for rate limiting and circuit breakers as client-side wrappers/middleware in the go-plugins repo https://github.com/micro/go-plugins/tree/master/wrapper. Plugins themselves can either be imported and added to a map within the go-micro/cmd package, which can be set via flags or env vars, or you can set it up yourself and pass it into the client/server.
Go does not support DSOs yet, so it's a bit difficult to do "hot swapping" in that way, if that's what you mean.
Edit: As I posted below, my reasoning for this comment is perhaps better summarized by Zach Holman's discussion on the topic of GitHub "streaks" and how he found himself burned out after a long series of streaks. Worth reading if you haven't already:
This... does not look like a sustainable level of work. 100 straight days of coding since last December? Yikes.
This sounds like an interesting project, founded on good ideas, but sustaining this level of work will do no good if you burn out before there are other contributors to pick up the slack. I know that a lot of people can keep working this hard for much longer, but quality will inevitably start to suffer, well before you actually burn out. And, at the very least, you need to take the time to step back and validate your ideas. This is something you'll never do if you're always heads-down coding.
I'm not saying that taking more breaks (weekends off, etc) works for everyone, but I just thought something needed to be said. I've seen friends and former colleagues fall into the trap of working on a project so deeply that it becomes a crazy obsession that they cannot escape. It's all they talk about and it can kind of scare people away. Not saying that this is such an extreme case, but it's something that can happen if you're not careful.
Hey, thanks for the feedback. I agree that going all out 100% is not sustainable. Don't be fooled by the GitHub streak. Some days it's a bug fix, a test, or some documentation. At some point I got addicted to maintaining the streak and actually made a conscious decision to stop at 100. What it represents, though, is consistency. Even a little bit of progress every day is something. The comment about taking a step back to evaluate: it's true. The reason the project has gotten so far is because I've been very thoughtful about the interfaces, sometimes spending 4-5 days thinking about a feature before getting to it, evaluating all the scenarios of use.
Good to hear. I actually think that maintaining the streak is pretty cool -- even if it seems like an alarming pace at first glance. Sounds like you're taking a healthy approach to the project. Best of luck!
I couldn't disagree with this more. Just because you have 100 straight days of "coding" doesn't mean it's just coding. It could be docs or anything else that's required. And most of those days look light. This is more like forming a habit than actually working 7 days a week. I mean, look at my commit log: https://github.com/timothycrosley/. I plan on doing this for the rest of my life. In reality, I've been coding every single day since I was 8. I'm now 26.
That's great -- and I'm glad you disagree. It's awesome to find a sustainable level of work that can be repeated like that, indefinitely.
I felt the need to speak up because I've seen contribution graphs EXACTLY like the author's and yours, except where the end result was not so happy. Take Zach Holman's for example:
Zach's streak was great for him. Until suddenly it wasn't, and far too late he realized he was burnt out. He's been amazingly public about the whole experience (his subsequent sabbatical, then his being fired from GitHub, then his year of soul-searching), and for every public story like this I suspect there are many, many private examples that go unheard.
I think it's great when you can reach such a peak of productivity, like in your case. But it's important to recognize that you're kind of an outlier. The endless contribution "streak" doesn't work for everyone, at least not always and indefinitely.
I don't want to generalize about what is always a good level of work and what isn't. What I DO want to communicate is that it's important to be introspective and take periodic steps back to evaluate the sustainability.
After all, a level of work that seems unsustainable might turn out to be quite manageable, and vice versa.
It's satire for sure. Everything has a phase of being a buzzword. Buzzwords annoy the crap out of me, but I was also using the word microservices before it was a thing, back when only Netflix was blogging about it. I could say SOA or distributed systems, but I say microservices because it encapsulates what I'm trying to say: "loosely coupled service oriented architecture with a bounded context", in Adrian Cockcroft's words.
The sidecar part of this looks essentially like SmartStack [1], but requiring the user to be aware of its existence (due to Micro's proto3 API), whereas users of SmartStack can more or less be ignorant of its existence. Actually, to fully do what the sidecar does, you'd want something like Kafka [2] or some other pub/sub system too.
It's great that more and more toolkits are being released. But my biggest gripe with microservice architectures and toolkits is that they ignore transactions across services.
It's not that they're ignoring transactions across services, it's that it's a difficult problem that shouldn't actually be solved by a toolkit. It's something that has to be addressed on a per-use-case basis and most likely modelled differently based on your architectural and database choices. At Hailo we used Cassandra, which was replicated globally. An eventually consistent database. We then used ZooKeeper at a regional level to do locking for the parts of the system that required consistency and serialisation. We made tradeoffs that allowed us to scale a global system. Data modelling and transactions were no easy thing, but the benefits were clear. I did not envy the payments team and have a lot of respect for what they accomplished with said architecture.
[1]: http://www.microservices.com/ben-christensen-do-not-build-a-...