

MicroservicePremium - danielalmeida
http://martinfowler.com/bliki/MicroservicePremium.html

======
benjaminwootton
I wrote this article some time ago and it has generated hundreds of inbound
emails asking me about the risk points associated with Microservices:

http://highscalability.com/blog/2014/4/8/microservices-not-a-free-lunch.html

Building Microservices does carry a real premium - you are building a complex
platform, which needs specialist skills and additional work to make it happen.

The payback of that premium comes later, in the ability to then iterate and
change the system easily and quickly.

The problem comes when people think they are building an application and not a
platform which will support them for the next 5+ years. I try to hammer this
point home to our clients who are looking at Microservice architectures.

~~~
MetaCosm
I have been slowly (over the last 4 or so years) moving away from microservices
towards the "cookie cutter" model
(http://paulhammant.com/2011/11/29/cookie-cutter-scaling/) - we didn't use that
term of art - and have been very happy with the change. I had completely drunk
the Kool-Aid on microservices, but after having to maintain them in production
for a few years, I changed my tune.

In my experience, that "iterate and change the system easily and quickly"
simply doesn't end up being true. That is the bait, but then comes the switch.
Once you have a real, heavily interdependent system, the cost of changing
things climbs steeply, and cleanly describing a working system becomes a
challenge. Simple management questions like "OK, so if we change X, what could
break?" get answered with "We should send out an email to all the teams to
ask!" (ugh)

The problem is that most significant changes to a component also change that
component's interface. So you have 'the choice': (1) maintain this interface
forever, or (2) ensure all the teams transition off it by X date, via a
deprecation system. Both work, and most companies seem to use a mix of the two:
they strive for (1) until it gets insane, then they do (2) and force everyone
to upgrade to X version by Y date. Rinse and repeat.
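
As a rough sketch of option (2) over HTTP - the class name, endpoints and
headers below are made up for illustration, not anyone's real API - the old
interface keeps answering but advertises its shutdown date, so teams can be
chased off it before the cutoff:

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // Hypothetical service keeping its old interface alive while announcing
    // a removal date so callers can be migrated before the cutoff.
    public class AccountService {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

            server.createContext("/v1/accounts", exchange -> {
                // Deprecated interface: still answers, but announces its removal
                // date (header names here are just illustrative, not a standard).
                exchange.getResponseHeaders().add("X-Deprecated", "true");
                exchange.getResponseHeaders().add("X-Sunset-Date", "2015-06-30");
                respond(exchange, "{\"id\":1,\"name\":\"old shape\"}");
            });

            // The new interface callers are expected to move to.
            server.createContext("/v2/accounts", exchange ->
                respond(exchange, "{\"id\":1,\"displayName\":\"new shape\"}"));

            server.start();
        }

        private static void respond(HttpExchange exchange, String json) throws IOException {
            byte[] body = json.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        }
    }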

We are currently doing the "fat binary" cookie-cutter system. Our deploy is a
single binary that does "all the things". There is a lot to like about this
system: it is easy to manage and roll back; you get compile-time checking,
plus additional tools for static analysis, etc.; there is no network overhead
on local calls; and you know and can clearly describe 'the system' - it isn't
in some state of continuous flux. Because of compile-time checking and static
analysis you can maintain 'one truth': when you upgrade a component, you can
go into all the callers and update them, so you don't have to keep maintaining
the old interface. You can also scale decently vertically, not just
horizontally, which at times is a far more efficient way to scale. Overall, it
has let us move much faster because it gives us a lot of confidence in what we
are shipping, and a sense of safety in that we can quickly go back to what we
had.
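
As a toy illustration of that 'one truth' point (not our actual code, just
invented names): deprecate the old signature, compile the fat binary with
-Xlint:deprecation, and the compiler lists every caller that still needs
updating before you delete the old method.

    class PriceCalculator {
        @Deprecated
        long priceInCents(int quantity) {                  // old interface
            return quantity * 100L;
        }

        long priceInCents(int quantity, String currency) { // new interface
            return quantity * 100L;                        // currency handling elided
        }
    }

    class Checkout {
        long total(PriceCalculator calc) {
            return calc.priceInCents(3);                   // javac flags this deprecated call
        }
    }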

------
twotwotwo
Yao Yu made some observations in her (excellent) talk about caching at Twitter
over the years (https://youtu.be/rP9EKvWt0zo) that stuck with me: having a
bunch of services doesn't by _itself_ keep problems isolated within one
service. Twitter's cache service had to deal with clients all rampaging to
load the same key zillions of times, for example.
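
(To make that concrete: a rough sketch of one common defence, per-key request
coalescing, with invented names - the kind of work the cache service has to do
no matter where the service boundary sits. Concurrent misses on the same hot
key share one backend load instead of stampeding.)

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Supplier;

    // Sketch of per-key request coalescing: when many callers miss on the same
    // hot key at once, only one of them hits the backing store and the rest
    // wait on the same in-flight result.
    class CoalescingLoader<K, V> {
        private final ConcurrentHashMap<K, CompletableFuture<V>> inFlight = new ConcurrentHashMap<>();

        V get(K key, Supplier<V> loadFromBackend) {
            CompletableFuture<V> mine = new CompletableFuture<>();
            CompletableFuture<V> winner = inFlight.putIfAbsent(key, mine);
            if (winner != null) {
                return winner.join();              // someone else is already loading this key
            }
            try {
                V value = loadFromBackend.get();   // only one caller reaches the backend
                mine.complete(value);
                return value;
            } catch (RuntimeException e) {
                mine.completeExceptionally(e);     // wake waiters on failure too
                throw e;
            } finally {
                inFlight.remove(key, mine);        // later misses trigger a fresh load
            }
        }
    }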

The impression that gives me is that some of the gain from microservices might
be from doing all of the things we're used to doing at service boundaries:
documenting APIs, sanity-checking requests, logging, monitoring
availability/resource usage/other metrics, etc.--the same sort of things any
public Web service does knowing it's going to have to deal with funny clients,
rising and falling load, and so on.

If all of the stuff it would take to properly maintain another usable
customer-facing service sounds too expensive, that may be a sign that this
isn't the place to split off a service. But if you find yourself _wishing_ for
separate monitoring, a cleaner API, etc. around some part of your system, it
becomes more interesting to separate.

(I can also buy that some folks see organizational benefits, or benefits from
isolating hardware or making operations async, but those are separate ideas.)

~~~
tracker1
There's also load... If you have a specific service that needs more resources,
you can allocate more instances of that service on more machines.

~~~
twotwotwo
(Yep, totally agree - I was trying to get at how there could be other benefits
in the last parenthetical comment. The extra process we do at service
boundaries just stood out to me from the caching talk.)

------
grandalf
What is a monolith? Intentionally tightly coupled code that you feel is going
to be manageable in the future?

I think the biggest anti-pattern is "DRY". This turns a simple Rails view into
a monstrosity that is used to render every page of a site, or a class into a
self-referencing, recursive mind-puzzle.

When a codebase starts down the monolithic path, developers will often use the
DRY anti-pattern to "shrink" what is actually complexity creep. They will also
reach for another anti-pattern: using the test-coverage percentage as a key
metric for deciding what to build or improve.

Both of these are attempts to fix something that is a far bigger problem:
excessive, unnecessary coupling in the core design. This is often due (as
others have pointed out) to blind adherence to frameworks and framework "best
practices".

In my opinion, the most important metrics are coupling, lines of code (the
fewer the better), and readability/understandability. Every line of code that
you maintain adds a cost to changing/improving the codebase (because someone
has to understand it in order to change it).

If you think of code in terms of its consumers, nearly every API is a micro-
service. The less magic/faith required by the consumer to use the API, the
more effectively it can be used and changed.

~~~
efsavage
There are two ways to think about microservices: development (which your
comment focuses on) and deployment. The pure/idealized microservice is
isolated from the rest of your environment/codebase on both of those axes. In
practice, there will be some coupling on both, and perhaps one of them isn't
isolated at all.

The monolith is simply the opposite of the microservice ideal: it is heavily
coupled in both development and deployment.

And yes, DRY does often run counter to development isolation, as you need to
balance repeating yourself against sharing code (and reducing isolation).

As these things generally go, any system in contact with the real world will
exist somewhere between the monolith and a federation of microservices. My
current company, for example, employs 2 distinct codebases, Java and
JavaScript, which have 3 and 2 deployments respectively, each in two similar-
but-not-the-same environments (in-house and white-label). We could easily
split those deployments into 2x or 5x the number, or we could join them; this
just happens to be where things naturally settled for the time being.

~~~
grandalf
Makes sense. I typically try to design things so that they are initially
joined (for convenience) but can easily be split off -- such as a simple
router between micro-apps which can be replaced by haproxy (or similar) later.
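
A rough sketch of what I mean, with made-up paths and ports: a tiny front door
that routes by path prefix, so haproxy can later take over the same prefixes
(path_beg ACLs) once the micro-apps move into their own processes.

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // Minimal front door that routes by path prefix. Today each "micro-app" is
    // just a local handler; later the same prefixes become haproxy backends and
    // the apps move to their own processes without callers noticing.
    public class FrontRouter {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/users", exchange -> respond(exchange, "users app"));
            server.createContext("/search", exchange -> respond(exchange, "search app"));
            server.start();
        }

        private static void respond(HttpExchange exchange, String body) throws IOException {
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        }
    }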

~~~
anezvigin
I like this too and usually do the same in the application layer for new
projects (though it takes more discipline).

Every module is built with the intent of breaking away later. Disallowing
intra-module communication is essential to this. That means no global ORMs or
global anything really.
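
For instance (a toy sketch with made-up names): each module exposes a narrow
interface and keeps its persistence private, so nothing else can reach its
tables through a shared ORM, and it can later be pulled out behind a network
API without touching callers.

    // Toy sketch: the billing module is only reachable through this interface.
    // Its persistence (tables, ORM entities, caches) stays private to the module,
    // and only plain value objects cross the boundary - never shared ORM entities.
    public interface BillingModule {
        InvoiceSummary invoiceFor(long customerId);

        final class InvoiceSummary {
            public final long customerId;
            public final long totalCents;

            public InvoiceSummary(long customerId, long totalCents) {
                this.customerId = customerId;
                this.totalCents = totalCents;
            }
        }
    }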

------
timruffles
Does this feel like backtracking to anyone else? Compare to Fowler's previous
discussions [1].

Maybe it's just hard to write about a new, interesting thing without most
people getting the impression you think it's _the_ way (especially developers,
who seem eager to declare any new tool the One True Way® and everything else
'legacy').

[1] http://martinfowler.com/articles/microservices.html#AreMicroservicesTheFuture

~~~
trustfundbaby
No?

> So we write this with cautious optimism. So far, we've seen enough about the
> microservice style to feel that it can be a worthwhile road to tread. We
> can't say for sure where we'll end up, but one of the challenges of software
> development is that you can only make decisions based on the imperfect
> information that you currently have to hand.

~~~
timruffles
I think I linked to the most balanced bit of that article :)

"We've seen many projects use this style in the last few years, and results so
far have been positive, so much so that for many of our colleagues this is
becoming the default style for building enterprise applications"

"Monolithic applications can be successful, but increasingly people are
feeling frustrations with them"

------
anezvigin
I have some experience designing and working on this style of architecture in
a non-enterprise non-bigco environment. Some things I learned over the years:

1) If the system is young and the team is small, consider imposing constraints
onto the system that remove certain classes of problems entirely. You can
later lift these constraints as the team grows. [1]

2) Your ops environment must be capable of multiplexing multiple services onto
the same virtual machine. The way you provision and classify machines is
affected by this. Services, by their nature, are small and are likely going to
be I/O bound. Deploying multiple services onto a single machine is very cost
effective.

3) If you're starting with a monolith, the best early candidates to break off
are things like search, graph stuff (eg: Neo4j), recommendations, activity
feeds, image processing, and other important but non-critical things. It's a
great way to drum up support for this architectural style and/or test how
useful it will really be.

4) Unless your engineering team has a polyglot culture (good for you!), be
careful about introducing your favorite language when building a new service.
Alienating a service is a great way to let it rot.

5) I've been asked how to define a service. Should X be one service? Should X
be split into two services? A good way to start is to think about data
locality. When a "user" is fetched are "friends" almost always queried too? If
so, keep them together. The next thing to look at is traffic patterns.
Another thought exercise is to ask, "if our startup makes a hard pivot
tomorrow, what are we likely going to keep? Users?".

6) Do what makes sense. A small startup with a 5-person team doesn't need 20
"microservices" to maintain. It's an operational burden. Break things away in
stages on an as-needed basis. I've actually found that keeping the monolith
around is a great outlet for rapid prototyping. Give the monolith a beefy
database and an in-memory datastore. When you need to prototype a feature for
a week, feed it to the monolith. When it's ready for production, break it off
if it makes sense.

7) If your ops environment is underdeveloped, it'll hurt bad. The value of a
good ops person cannot be overstated (esp. if you're dealing with rapid
growth). They're wizards. Learn everything you can from them.

[1]: I like to impose a constraint that no service is allowed to talk to any
other service unless it is: a) customer-facing (eg: the website or the mobile
api), or b) non-critical and low volume. At first, your services will act as
self-contained front ends to their databases. Your customer-facing
applications will stitch together responses from these services (hopefully in
parallel). You'll have a clear understanding of how data flows in your system.
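
To sketch that stitching (the service calls are faked out as suppliers here,
names invented): the customer-facing app fans out to the self-contained
services concurrently and composes the response, while the services themselves
never talk to each other.

    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.function.Supplier;

    // Sketch of the stitching step: the customer-facing app calls the user and
    // friends services concurrently; the two remote calls overlap instead of
    // running one after the other.
    class ProfilePage {
        String render(Supplier<String> userService, Supplier<List<String>> friendsService) {
            CompletableFuture<String> user = CompletableFuture.supplyAsync(userService);
            CompletableFuture<List<String>> friends = CompletableFuture.supplyAsync(friendsService);
            return user.join() + " is friends with " + friends.join();
        }
    }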

~~~
jacques_chester
> _If your ops environment is underdeveloped, it'll hurt bad._

This is what PaaSes are for.

------
pacuna
My question is: if eventually some tools prove useful for overcoming the
complexity introduced by microservices (e.g. CoreOS, Docker, Kubernetes,
Netflix OSS technology, etc.), what would be the argument for not using them
instead of big monoliths?

~~~
jacques_chester
Those tools already exist -- platforms as a service. Heroku pioneered them and
you can get on-premise installations of Cloud Foundry or OpenShift to play
along at home.

I've worked on Cloud Foundry and basically it makes deployment a non-issue.
Here's how you deploy the service:

    cf push the-microservice

And if you need 100 copies:

    cf scale the-microservice -i 100

Or maybe 2 copies with more RAM:

    cf scale the-microservice -m 4G -i 2

Need to update it?

    cf push the-microservice

You see where I'm going here.

I know it's fun to play Dr Frankenstein and hand-roll your own devops system.
I've seen systems built out of 3 layers of Jenkins servers, two running
Puppet, emitting timestamped RPMs deployed onto fresh VM images. I've seen
people use rsync, I've seen them use git, I've seen all manner of clever
hacks, and they all have the same problem.

You marry this system of deployment and upkeep and now you own it. Forever. By
yourself. Then the genius who wrote it leaves, and you're stuck with a system
_literally nobody else uses_.

There's _no_ advantage to writing your own PaaS. Just use one off the shelf.

~~~
pacuna
Yeah, but I would prefer to work with and learn that system rather than a
giant monolith that nobody understands.

------
commentnull
I preferred the 'Pattern' pattern, where each line of code is its own pattern.
In fact, I think he's writing a whole new book about it now.

------
strictfp
Most Linux distros can be considered a collection of microservices. The
stability-vs-innovation tradeoffs inherent to this type of system are well
understood, as is the complexity of updating and testing compatibility. I
don't see why it should be so hard to draw a parallel to microservices.

