
Microservice Trade-Offs - mjohn
http://martinfowler.com/articles/microservice-trade-offs.html
======
cddotdotslash
While microservices are certainly the buzzwordy trend of the month, developers
really need to take a step back and ask if designing or refactoring a project
into microservices is really helpful. I've talked to developers who have taken
a single app that worked perfectly fine and split it into five or six services
just because "microservices!" Now they're stuck maintaining five apps, five
APIs, etc. and have yet to actually benefit from the split. My rule of thumb
is: can I see parts of this project being used in another project? If the
answer is yes, it gets considered. If no, it stays the working app that it is.

~~~
noelwelsh
IMO, microservices are successful when the issue is scaling people (you can
split into loosely coupled teams, one per service), or scaling data (different
read / write patterns allow different data stores). I don't consider reuse a
criterion, unless by a project you mean a new feature within the existing app.
It's not like Spotify, for example, is starting many new projects, but they
will benefit from the above.

~~~
wpietri
Yes. A friend is at a company that went from a monolith to a microservices
approach. (The service-to-engineer ratio is something like 1:2.) He says it's
radically better because somebody after a new feature can mainly just compose
existing services. But they had hundreds of employees before they made the
switch.

Personally, I would start sooner than that. I think new technologies make
service creation easier. But the right moment for me depends a lot on product,
team, and tech.

~~~
apalmer
Most of the 'problems' with microservices are in the operations phase, not
typically in the development phase.

~~~
wpietri
Do serious shops have phases for these things anymore? In my view, a modern
shop should be developing and releasing frequently, and its engineers should
experience the operational consequences of their development choices.

So I guess I would say that any place not working like that might not be ready
for a microservices approach.

------
programminggeek
I contend that a lot of the reason microservices have grown so popular is
that we are missing boundaries in our applications. Instead of creating
stronger boundaries between modules (or having well-defined modules at all),
we create physical boundaries between applications and computers to force a
protocol-based system.

I gave a talk at MWRC 2015 about this: [http://brianknapp.me/message-
oriented-programming/](http://brianknapp.me/message-oriented-programming/)

~~~
josho
You hit the nail on the head. Without discipline, module boundaries aren't
enforced, as developers reach into internal APIs between modules to meet
deadlines. The result is a complex web of dependencies that are difficult to
understand and maintain.

Microservices are simply a physical boundary to enforce discipline between
logical modules.

It treats a symptom rather than the root cause.

~~~
beat
But is treating the symptom rather than the root cause always a bad thing?

Alcoholics don't keep booze in the house either. That's treating a symptom,
not a cause, but it's effective. And yes, I did compare programmers violating
interface boundaries to alcoholics.

~~~
josho
Good point, and I do agree to an extent. However, without discipline we are
simply moving the risk: from "we can't enforce interface boundaries in our
system" to "we have a microservice architecture with N systems that we need
to deploy as a monolith because we don't have the discipline to ensure
backwards compatibility between systems."

Yes, really. My current enterprise client takes a week to deploy their
microservice architecture, because all systems need to be deployed at once.

So they simply traded one complexity for another without fixing the root
problem: the lack of a disciplined development methodology.

~~~
beat
Yeah, I think there's a problem with the definition of microservices in theory
vs in practice. In theory, the boundary of a microservice is the api, right?
In practice, the boundary is the set of things you must change in order to
update one part. If you have to DEPLOY ALL THE THINGS!!!! in order to update
a single component, _it's not a microservices architecture_. Not in practice,
anyway. It's a monolith with a bunch of different moving parts.

------
markbnj
Reading this makes me feel a little like I woke up in 1990.

>> Distributed systems are harder to program, since remote calls are slow and
are always at risk of failure.

This is a true statement, but it's hardly the point on which current
architecture choices turn. Distributed systems are the norm now. Almost
everyone is working on one, even if they don't think of it that way (and I bet
they don't). There are still simple, single-process programs that are relevant
to users and the people working on them, but those aren't the domains in which
this microservices debate is supposed to be taking place.

Is it relevant to evaluate "microservices" vs. a "monolithic architecture"
based on the costs of traversing the network stack? Back when we were thinking
about breaking up our C and C++ programs into processes using RPC as glue this
was something we worried a lot about, but that is because in many cases having
everything in a single process was still a credible alternative. Few people
are wrestling with this choice today.

The last site I worked on consisted of nginx, elasticsearch, logstash, kibana,
postgresql, celery, redis, and a bunch of custom python, java, and javascript
code. It could run on one server or (as it did in production) twelve. Almost
all of those pieces ran in separate processes, communicated over the network
using mostly standard protocols, and did one specific thing. Were these
microservices?

I feel like microservices as a thing is one of the least meaningful tech fads
I have seen. Minimality and cohesiveness aren't surprising new challenges.
They were desirable concepts of C++ class libraries two decades ago. An
implementation of a service should always be both minimal and cohesive, and it
should be complete. It should be as small as possible. Whether that is "micro"
or not is entirely too subjective for me.

The other trade-offs mentioned, consistency and complexity, are not much more
relevant to the big question the author is trying to convince us to ask.
Consistency is a property of a view of state, but the article is about
distributing code (otherwise why care about module boundaries and
deployment?), not distributing state. Complexity is always a key trade-off,
but the complexity of distributed code is table stakes in the world we
actually work in.

~~~
bradhe
Plus there's a HUGE flaw in:

> since remote calls are slow and are always at risk of failure.

Talking to e.g. MySQL is a remote call too, that can fail just as easily.

~~~
icebraining
Yes, and sometimes an in-process database makes sense to avoid those calls
too. I don't see the flaw.

~~~
bradhe
My point was that most people don't consider the fact that calling MySQL or
the like is a remote call and is just as likely to fail. Of course there are
ways to avoid that, but most people don't think in those terms.

------
sinzone
With a small team, we've refactored more than 150k LOC from being a single app
into a lot of small services and we're not looking back.

To avoid increasing complexity in managing all these APIs we have added KONG
[1] on top, so we can keep higher consistency and orchestrate common
functionalities among multiple services. In some ways it's funny, because KONG
"centralizes" a decentralized architecture but we needed one home base to
always refer to or rely on, when most of the system started to live on the
edges.

In space explorations you always have a mothership after all.

[1] [https://github.com/mashape/kong](https://github.com/mashape/kong)

------
Animats
Well, dividing your problem into microservices is a lot more likely to work
than another approach to change being pitched today - patching running
programs on the fly. If you're considering that, something is horribly wrong
with the system architecture.

As for "microservices", part of the problem is that the UNIX/Linux world has
historically sucked at interprocess communication. Everything looks like a
pipe, and you have to build something that works like a subroutine call on top
of it. (Yes, there's System V IPC, which nobody uses.) The mismatch there
results in much overhead associated with framing and such. Also, because the
IPC and scheduler aren't integrated, each interprocess call tends to put
either the sender or receiver at the end of the line for CPU time. This can
add huge latency to service calls.

I've written hard real time code for robotics on QNX, which has a good
MsgSend/MsgReceive system for calling other programs. Message passing like
that works out quite well, especially when some programs run at higher
priorities
than others and have hard time constraints. QNX doesn't have a really good
system for starting up a set of programs and getting them communicating,
though; I had to write something for that.

One lesson from QNX is that marshalling and interprocess communication should
be separated if performance matters. IPC done right is fast, and marshaling
done with code generated for each message format is fast. Generalized
interpretive schemes like CORBA and JSON-based systems have much higher
overhead than a subroutine call.
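
To make the contrast concrete, here is a rough Python sketch (the `struct`
module standing in for per-message generated marshalling code; the message
layout itself is invented for illustration) of a fixed-format codec versus a
generalized interpretive one:

```python
import json
import struct

# Fixed-format codec: the layout is agreed in advance, so decoding is a
# straight memory copy rather than text parsing.
MSG = struct.Struct("<Hd")  # message id (uint16) + one double: 10 bytes

packed = MSG.pack(7, 3.14)
msg_id, value = MSG.unpack(packed)

# Generalized interpretive codec: self-describing and flexible, but the
# decoder must parse text and build objects on every call.
generic = json.dumps({"id": 7, "value": 3.14}).encode()
decoded = json.loads(generic)
```

The fixed-format message is smaller and its encode/decode cost is nearly
constant, which is the point about separating marshalling from the IPC path.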

Another lesson is that better tools for managing groups of programs are
necessary. It's a hard problem. Look at the "initd" mess. That's just a
special case of managing a group of microservices. Just getting everybody
connected up securely at startup is hard.

~~~
icebraining
_As for "microservices", part of the problem is that the UNIX/Linux world has
historically sucked at interprocess communication. Everything looks like a
pipe, and you have to build something that works like a subroutine call on top
of it. (Yes, there's System V IPC, which nobody uses.)_

What about Datagram UNIX sockets using sendmsg/recvmsg? I've implemented a
simple RPC using them and it seemed fine.
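
As a hedged illustration of that approach, a minimal request/response
exchange over UNIX datagram sockets in Python might look like this (a
socketpair stands in for a server bound to a filesystem path, and the JSON
envelope and "add" method are invented for the sketch):

```python
import json
import socket

# One connected pair of UNIX datagram sockets; a real service would
# instead bind a path such as /run/myservice.sock and accept clients.
client, server = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

def serve_one(sock):
    """Read one request datagram with recvmsg and answer it with sendmsg."""
    data, *_ = sock.recvmsg(4096)
    req = json.loads(data)
    result = sum(req["params"]) if req["method"] == "add" else None
    sock.sendmsg([json.dumps({"result": result}).encode()])

# Datagrams queue in the kernel, so one thread can demo the round trip.
client.sendmsg([json.dumps({"method": "add", "params": [2, 3]}).encode()])
serve_one(server)
data, *_ = client.recvmsg(4096)
reply = json.loads(data)
```

Since each datagram arrives whole, message framing comes for free, which is
most of what a pipe-based RPC has to reinvent.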

------
PaulHoule
Correct but a bit boring.

In the Java world there is the idea that you make interfaces for services and
then a dependency injection system decides which implementations to create at
runtime.

Many useful microservices can be implemented with a key-value store, and this
could be implemented with anything from an in-memory Hashtable to something
that uses the disk or off-heap memory, or runs in a huge distributed cluster.

In that case you get to use in-process when it is expedient (quite literally,
you cut power and latency if you don't waste time turning floating point
numbers into strings and whatnot). When out-of-process makes sense, you are
ready.

To accompany this you need frameworks and tools that eliminate a lot of
overhead, for instance, to automatically generate the stub code for service
calls, manage a large number of servers, etc.
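
A minimal sketch of that idea, in Python for brevity (the class and factory
names are invented; a real setup would use a DI container and generated
stubs rather than a hand-rolled factory):

```python
from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    """The service interface that all callers are written against."""
    @abstractmethod
    def get(self, key: str): ...

    @abstractmethod
    def put(self, key: str, value) -> None: ...

class InMemoryStore(KeyValueStore):
    """In-process implementation: no serialization, no network hop."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value

# A networked implementation would provide the same two methods over a
# socket. This factory plays the role of the injection wiring: callers
# receive whichever implementation it decides to create.
def make_store(config) -> KeyValueStore:
    return InMemoryStore()  # swap here when out-of-process makes sense

store = make_store({})
store.put("session:42", {"user": "ada"})
```

Because callers only see `KeyValueStore`, moving the store out of process is
a wiring change, not a code change.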

~~~
fixxer
> In the Java world...

We do the same thing in Go, C#, ...

[https://en.wikipedia.org/wiki/Interface-
based_programming](https://en.wikipedia.org/wiki/Interface-based_programming)

------
gshx
It's hard to correlate the points made in this post with eventual consistency.
Microservice or any service's runtime distribution and deployment model has
impacts on consistency but that doesn't imply that systems will automatically
become eventually consistent. Nothing prevents service instance clustering and
co-location on the same hardware machine in different runtime units like
containers or VMs, talking to a db running on a single machine. It's a
function of the scale and maturity of the system. Consistency itself is a
function of state management within a system and more importantly, a non-
trivial system typically has quite a few data stores and access patterns with
their respective consistency requirements. Saying that running microservices
implies eventual consistency is painting with very broad brush strokes.

------
siscia
Why don't we build "micro monoliths"?

With microservices, each service is very specialized, and every service
should do just one thing well: think of sending an email, loading the ads,
or loading the comments.

Why don't we build a very shallow service that does everything for just a
small part of the website?

In a classic website, a micro monolith (mm) could be something like "/login"
or "/post-picture".

The mm would not call any other service, but it would be able to do every
action by itself.

Simply deploy one (or more) new service for every (busy) endpoint of your
application.

In this way we can share, or isolate, our codebase.

It would be easy to deploy; basically you are simply using a whole server to
run one single function at very high frequency.

It could bring a lot of opportunities for optimization.

Just thinking...

~~~
agumonkey
I don't know if this has been researched by systems theorists. A spectrum of
scale from micro to mono: when micro communication costs are too high,
integrate; rinse, repeat.

Makes me think of
[https://news.ycombinator.com/item?id=9777567](https://news.ycombinator.com/item?id=9777567)

------
kordless
> Many organizations will find the difficulty of handling such a swarm of
> rapidly changing tools to be prohibitive.

I've been working at Giant Swarm[1] for the last few months as the evangelist.
Our original intent was to focus on providing a PaaS-like containerized stack
in a multi-tenant/public cloud offering; something that felt a bit like
Heroku, but less restrictive stack-wise. We're now engaging in a few
professional services deals, given the high demand for an easy-to-deploy/easy-
to-run container system.

I think the high demand we are seeing in the ecosystem is coming from
organizations/individuals who want to understand how to use and deploy
containerized stacks. I see the primary problems as being that people are
unable to keep up with the innovation in the space (drinking from a firehose)
and are unable to find a properly skilled labor force to implement a solution
(training/experience lagging behind demand). That observation directly
relates to Martin's comments on operational complexity.

I will note I observed a similar phenomenon with OpenStack, so my observations
are certainly not an indicator that people will be _using_ these deployments
initially. It will likely take some time before microservice-based development
is a common pattern in larger organizations. At the very least, they know they
need to be thinking about it...

[1] [https://giantswarm.io](https://giantswarm.io)

------
randerson
One benefit of a microservices/SOA architecture that I never see mentioned is
that it can make you a more attractive acquisition target (if your acquirer is
doing decent technical due diligence.) There's a good chance that your
acquirer (a) just wants a specific part of your stack, not the whole
monolithic codebase with things they don't care about, and (b) they want to
integrate it into their existing systems, which are very likely written in a
different language.

------
sybhn
> Operational Complexity: You need a mature operations team to manage lots of
> services, which are being redeployed regularly.

IMO you need first and foremost a mature development team that releases
operable, diagnosable, and fixable (micro)services. Managing a swarm of mini
components is a lot easier when each part is manageable to begin with.

~~~
mfburnett
> Operational Complexity: Tooling is still immature, but my instinct tells me
> that even with better tooling, the low bar for skill is higher in a
> microservice environment.

Seems like most of the tooling now focuses on automating small parts of dev
and ops teams' workflows, instead of looking at the larger picture of
organizational tooling. I would guess that in a few years we'll see a lot more
PaaS-workflow solutions focused on abstracting away a lot of the operational
complexity of microservice architectures, reducing the barrier to entry for
"maturity" of dev/ops teams, just like AWS & co reduced the need for people to
really understand server hardware.

------
ninjakeyboard
I like the approach of building a monolith (with smaller libraries) and then
splitting it once you need to. The argument is that you're likely to choose
the wrong services to separate until after you've built a bunch of the
system.

~~~
josho
I've never worked with a client that built libraries (e.g. jars or DLLs,
etc.). The reason was that it was hard and took effort. So we ended up with
monolithic applications.

Fast forward to today: we have essentially jars and DLLs with the added
overhead of an entire system and deployment to maintain.

Boy, have we gotten things wrong in this industry.

------
wiremine
I'm surprised we haven't seen any attempts at frameworks that try to solve
this: something that allows you to start with a monolithic app, but is
designed to split off bits and pieces into standalone microservices.

Or maybe this exists and hasn't got a lot of press. Or it has been tried and
failed.

------
jermo
If you use frameworks where components communicate by exchanging messages then
it's quite easy to start with a single app and later scale it to multiple
services. With frameworks like Akka (Actors) or Vert.x the communication can
be local or remote and the calling code is the same for both.
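
A rough sketch of that location transparency, in Python rather than Akka or
Vert.x (the `ActorRef`/`LocalRef` names and the message are invented for
illustration):

```python
import queue

class ActorRef:
    """What callers see: fire-and-forget message delivery."""
    def tell(self, msg):
        raise NotImplementedError

class LocalRef(ActorRef):
    """In-process delivery: messages go into a local mailbox queue."""
    def __init__(self, handler):
        self.handler = handler
        self.mailbox = queue.Queue()

    def tell(self, msg):
        self.mailbox.put(msg)

    def process_one(self):
        self.handler(self.mailbox.get())

# A RemoteRef with the same tell() signature would serialize the message
# and send it over the network; the calling code below would not change.
received = []
ref = LocalRef(received.append)
ref.tell({"event": "user_signed_up"})  # same call, local or remote
ref.process_one()
```

Since callers only depend on `tell()`, a component can start local and move
behind the network later without touching its clients.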

~~~
andreasklinger
In Rails you can take a similar approach (preparing for later scaling) by
encapsulating the actual fetching logic in Repository objects.
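
The same idea sketched in Python (the `User`/`UserRepository` names are
invented; in Rails these would be plain Ruby classes wrapping ActiveRecord
queries):

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

class UserRepository:
    """Callers ask for domain objects; only this class knows the storage."""
    def __init__(self, rows):
        self._rows = {row.id: row for row in rows}

    def find(self, user_id):
        # Today: an in-memory dict (stand-in for a database query).
        # Later: swap this body for a call to a remote user service,
        # without touching any caller.
        return self._rows.get(user_id)

repo = UserRepository([User(id=1, name="ada")])
user = repo.find(1)
```

All the fetching logic lives in one place, so extracting it into a service
later changes one class instead of every call site.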

------
jtwebman
I think Elixir/Erlang is a good tradeoff on both, would you guys agree?

[http://blog.plataformatec.com.br/2015/06/elixir-in-times-
of-...](http://blog.plataformatec.com.br/2015/06/elixir-in-times-of-
microservices/)

------
jackgavigan
Despite having an MBA, I do, on occasion, find myself involved in a discussion
about architecture.

My rule of thumb when deciding whether to deliver a piece of functionality as
a service is "Will another application/system/whatever ever want to be able to
use this functionality?"

If the answer is yes, that would be a fairly compelling argument for
implementing it as a service. If the answer is no, then the question becomes
"Why _not_ implement it monolithically?"

------
ExpiredLink
First he fuels the hype. Then he backpedals. A master of self-promotion.

~~~
yummyfajitas
As a person who has done this accidentally, I can offer an alternate opinion.

First, you learn about a cool new thing, understand it and its limitations
well, and advocate it as an improvement:
[https://www.chrisstucchio.com/blog/2012/bandit_algorithms_vs...](https://www.chrisstucchio.com/blog/2012/bandit_algorithms_vs_ab.html)

Sometime later, you realize that people are using it well beyond its domain
of usefulness and "backpedal", i.e. try to make sure the limitations are also
well known:
[https://www.chrisstucchio.com/blog/2015/dont_use_bandits.htm...](https://www.chrisstucchio.com/blog/2015/dont_use_bandits.html)

------
curiously
Why is there a consistent drive in the industry to invent new buzzwords and
apply untested, complicated ways of working?

The compromises you have to make to have a microservice architecture don't
make sense for anyone other than Google or Amazon or extremely large
organizations.

Even with such an architecture in place, you are going to end up with far
more overhead by using microservices; it _simply_ isn't the case that by
isolating individual components into functions, you suddenly get
productivity.

It just infuriates me when engineers or product managers bored with their
jobs constantly invent buzzwords to confuse, increase complexity, end up
failing, and go back to just regular old boring tech.

If it ain't broke don't fix it. Why the fuck would you want to now have 100
different API end points to do something that would've taken less than 50 or
so lines of code? This doesn't make sense for 99% of software companies out
there.

~~~
parasubvert
The majority of software companies (by employment) are enterprises that have
thousands of legacy endpoints in different protocols like SOAP, MQ, or CORBA,
deployed in various monolithic shapes.

They damn well will get a lot of benefits from microservices for their newer
capabilities... IF they also work on the operational aspects (continuous
delivery, a devops culture, and some kind of automated operating platform).

No one sane is publicly advocating building microservices for a small
single-team app.

